Join Paul Canetti, CEO of Skej, as he discusses the unique challenges of building AI products that operate without traditional user interfaces, instead functioning as virtual humans that interact through natural language via email addresses, phone numbers, and Slack handles. Drawing from his experience in UX design at Apple during the iPhone era, Canetti explains how building non-deterministic AI systems differs fundamentally from traditional software, requiring multiple quality assurance layers to prevent hallucinations and to ensure AI assistants know when to remain silent in group conversations. He explores the shift toward anthropomorphized AI assistants with distinct personalities, arguing that as forms become obsolete and natural language interfaces go mainstream, the future lies in freeing people to do uniquely human work while AI handles the generic tasks that anyone could accomplish but everyone suffers through.
--------
38:44
--------
Interview #76 Zachary Hanif, VP of AI/ML at Twilio
Join Zachary Hanif, VP of Data and AI at Twilio, as he discusses the fundamental differences between building AI systems for regulated financial services and for communication platforms, drawing from his experience at Capital One implementing rigorous model governance frameworks that reduce maintenance costs while accelerating development timelines. Hanif addresses the critical balance between explainable AI and high-performing black-box models, emphasizing that organizations must identify where each use case falls on the explainability spectrum rather than applying blanket requirements. He explores privacy-by-design principles for real-time AI systems, the challenge of moving from proof of concept to production (with 80% of AI pilots failing), and offers a practical framework for successful AI implementation: clear objective criteria, close collaboration between technical teams and domain experts, and properly tempered expectations for experimental development timelines.
--------
26:20
--------
Interview #75 Santosh Kaveti, CEO of ProArch
Join Santosh Kaveti, CEO of ProArch, as he addresses the critical gap between AI ambition and execution in enterprise environments, where, despite widespread C-suite commitment, only a quarter of organizations achieve meaningful AI implementation. Kaveti outlines his four-pillar framework for operationalizing AI, emphasizing that AI adoption is fundamentally a people and culture problem rather than a technology issue, with 63% of companies lacking basic AI governance policies. He discusses the growing challenge of shadow AI usage, the convergence of IT and operational technology that is creating new security vulnerabilities in critical infrastructure, and how organizations can build compliance frameworks that won't become obsolete as AI regulations continue to evolve rapidly.
--------
28:17
--------
Interview #74 Suman Kanuganti, CEO of Personal AI
Join Suman Kanuganti, CEO of Personal AI, as he discusses the shift away from the one-size-fits-all approach of large language models toward specialized personal language models that capture individual decision-making patterns and expertise. Kanuganti explains how artificial personal intelligence differs from artificial general intelligence, focusing on AI personas built with privacy-by-design architecture that can run efficiently on edge devices rather than requiring massive cloud infrastructure. He examines the future of distributed AI systems and how smaller, specialized models can deliver superior performance for specific use cases while addressing the fundamental scalability and cost challenges facing an AI industry dominated by power-hungry large language models.
--------
28:37
--------
Interview #73 Jay Dawani, CEO of Lemurian Labs
Join Jay Dawani, CEO of Lemurian Labs, as he discusses the critical infrastructure challenges facing AI development and his company's efforts to rebuild the AI software stack from the ground up. Drawing from his experience as a former NASA AI advisor who worked on Mars rover navigation and exoplanet research, Dawani explains how current AI systems are plagued by massive inefficiencies, with some data centers operating at only 10-15% utilization despite consuming enormous amounts of energy. The conversation explores how the industry must shift from kernel-based programming models designed for single GPUs to dynamic runtime systems that can efficiently manage communication and memory across hundreds of thousands of processors, ultimately making advanced AI more accessible and sustainable.