Introducing Chain of Thought, the podcast for software engineers and leaders that demystifies artificial intelligence.
Join us each week as we tell the stories of the people building the AI revolution, unravel actionable strategies, and share practical techniques for building effective generative AI applications.
The Making of Gemini 2.0: DeepMind's Approach to AI Development and Deployment | Logan Kilpatrick
Google’s strength in AI has often seemed to get lost amid OpenAI announcements and DeepSeek fervor, yet Gemini 2.0 is more than good for many tasks: it’s the model to beat, and we have the research to back it up. This week, Logan Kilpatrick, senior product manager at Google DeepMind, joins us to discuss Gemini’s creation story, its emergence as the premier model in the AI race, and why the launch of Gemini 2.0 is great news for developers.
During the conversation, Conor and Logan explore the exciting world of multimodal AI, Gemini's strengths in agentic use cases, and its unique approach to function calling, compositional function calling, and the seamless integration of tools like search and code execution.
They also chat about Logan’s vision for a future where AI interacts with the world more naturally, offering a view of the potential of vision-first AI agents, and why Google's hardware advantage is enabling Gemini's impressive performance and long context capabilities.
Follow along with the discussion using Galileo’s AI Agent Leaderboard: https://huggingface.co/spaces/galileo-ai/agent-leaderboard
Chapters:
00:00 DeepMind's Role in Gemini's Development
03:49 Gemini 2.0 Updates and Developer Highlights
06:08 Agentic Use Cases and Function Calling
11:29 Multimodal Capabilities
16:15 Putting AI in Production
21:06 Gemini's Differentiation and Hardware
31:22 Future Vision for Gemini and G Suite Integration
35:23 Gemini for Developers
39:02 Conclusion and Farewell
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Logan
Twitter: @OfficialLoganK
LinkedIn: https://www.linkedin.com/in/logankilpatrick/
Show Notes
Try Gemini for yourself: gemini.google.com
Gemini for Developers: aistudio.google.com
Check out Galileo
Try Galileo
--------
40:32
DeepSeek Fallout, Export Controls & Agentic Evals
This week, hosts Conor Bronsdon and Atindriyo Sanyal discuss the fallout from DeepSeek's groundbreaking R1 model, its impact on the open-source AI landscape, and how its release will shape model development moving forward. They also discuss what effect (if any) export controls have had on AI innovation and whether we’re witnessing the rise of “Agents as a Service”.
To tackle the increasing complexity of agentic systems, Conor and Atin highlight the need for robust evaluation frameworks, discussing the challenges of measuring agent performance and how the recent launch of Galileo's agentic evaluations is empowering developers to build safer and more effective AI agents.
Chapters:
00:00 Introduction
02:09 DeepSeek's Impact and Innovations
03:43 Open Source AI and Industry Implications
13:44 Export Controls and Global AI Competition
18:55 Software as a Service
19:29 Agentic Evaluations
25:14 Metrics for Success
31:34 Conclusion and Farewell
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Check out Galileo
Try Galileo
Show Notes
On DeepSeek and Export Controls
Introducing Agentic Evaluations
--------
32:41
AI, Open Source & Developer Safety | Block’s Rizel Scarlett
As DeepSeek so aptly demonstrated, AI doesn’t need to be closed source to be successful.
This week, Rizel Scarlett, a Staff Developer Advocate at Block, joins Conor Bronsdon to discuss the intersections between AI, open source, and developer advocacy. Rizel shares her journey into the world of AI, her passion for empowering developers, and her work on Block's new AI initiative, Goose, an on-machine developer agent designed to automate engineering tasks and enhance productivity.
Conor and Rizel also explore how AI can enable psychological safety, especially for junior developers. Building on that theme, they dive into responsible AI development, ethical considerations in AI, and the impact of community involvement when building open source developer tools.
Chapters:
00:00 Rizel's Role at Block
02:41 Introducing Goose: Block's AI Agent
06:30 Psychological Safety and AI for Developers
11:24 AI Tools and Team Dynamics
17:28 Open Source AI and Community Involvement
25:29 Future of AI in Developer Communities
27:47 Responsible and Ethical Use of AI
31:34 Conclusion
Follow
Conor Bronsdon: https://www.linkedin.com/in/conorbronsdon/
Rizel Scarlett
LinkedIn: https://www.linkedin.com/in/rizel-bobb-semple/
Website: https://blackgirlbytes.dev/
Show Notes
Learn more about Goose: https://block.github.io/goose/
--------
33:43
AI in 2025: Agents & The Rise of Evaluation Driven Development
"In the next three to five years, every piece of software that is built on this planet will have some sort of AI baked into it." - Atin Sanyal
Chain of Thought is back for its second season, and this episode dives headfirst into the possibilities AI holds for 2025 and beyond. Join Conor Bronsdon as he chats with Galileo co-founders Yash Sheth (COO) and Atindriyo Sanyal (CTO) about the major trends to watch this year: AI finding its product "tool stack" fit, decreasing generation latency, AI agents and their potential to revolutionize code generation and other industries, and the crucial role of robust evaluation tools in ensuring the responsible and effective deployment of these agents.
Yash and Atin also highlight Galileo's focus on building trust and security in AI applications through scalable evaluation intelligence. They emphasize the importance of quantifying application behavior, enforcing metrics in production, and adapting to the evolving needs of AI development.
Finally, they discuss Galileo's vision for the future and their active pursuit of partnerships in 2025 to contribute to a more reliable and trustworthy AI ecosystem.
Chapters:
00:00 AI Trends and Predictions for 2025
02:55 Advancements in LLMs and Code Generation
05:16 Challenges and Opportunities in AI Development
10:40 Evaluating AI Agents and Applications
16:07 Building Evaluation Intelligence
23:41 Research Opportunities
29:50 Advice for Leveraging AI in 2025
32:00 Closing Remarks
Show Notes:
Check out Galileo
Follow Yash
Follow Atin
Follow Conor
--------
33:13
Now is the Time to Build | Weaviate’s Bob van Luijt
"This is the time. This is the time to start building... I can't say that often enough. This is the time." - Bob van Luijt
Join Bob van Luijt, CEO and co-founder of Weaviate, as he sits down with our host Conor Bronsdon for the Season 2 premiere of Chain of Thought. Together, they explore the ever-changing world of AI infrastructure and the evolution of Retrieval-Augmented Generation (RAG) architecture.
Bob's journey with Weaviate offers a compelling example of how to adapt to rapid changes in the AI landscape. He discusses the importance of understanding developer needs and building AI-native solutions, emphasizing the potential of generative feedback loops and agent architectures to revolutionize data management.
Chapters:
00:00 Welcome to Season 2
01:43 The Evolution of AI Infrastructure
04:13 Navigating Rapid Changes in AI
07:39 Generative Feedback Loops and AI Native Databases
13:26 Challenges and Opportunities in AI Production
19:03 The Importance of Documentation and Developer Experience
27:13 Future Predictions and Paradigm Shifts in AI
31:17 Final Thoughts and Encouragement to Build
Follow:
Conor Bronsdon: https://www.linkedin.com/in/conorbronsdon/
Bob van Luijt: https://www.linkedin.com/in/bobvanluijt/
Weaviate: https://www.linkedin.com/company/weaviate-io/
Show notes:
Learn more about Weaviate: https://weaviate.io/