Gemini 2.0 and the evolution of agentic AI with Oriol Vinyals
In this episode, Hannah is joined by Oriol Vinyals, VP of Research at Google DeepMind and Gemini co-lead. They discuss the evolution of agents, from single-task models to more general-purpose models capable of broader applications, like Gemini. Vinyals guides Hannah through the two-step process behind multimodal models: pre-training (imitation learning) and post-training (reinforcement learning). They discuss the complexities of scaling and the importance of innovation in architecture and training processes, and close with a quick whirlwind tour of some of the new agentic capabilities recently released by Google DeepMind.

Note: To see all of the full-length demos, including unedited versions, and other videos related to Gemini 2.0, head to YouTube.

Further reading/watching:
Gemini 2.0
Decoding Google Gemini with Jeff Dean
Gaming, Goats & General Intelligence with Frederic Besse

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Bernardo Resende
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind

Subscribe to our YouTube channel
Find us on X
Follow us on Instagram
Add us on LinkedIn
Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
--------
49:25
The Balancing Act: Regulation & AI with Nicklas Lundblad
There is broad consensus across the tech industry, governments, and society that as artificial intelligence becomes more embedded in every aspect of our world, regulation will be essential. But what does this look like? Can it be adopted without stifling innovation? Are the current frameworks presented by government leaders headed in the right direction?

Join host Hannah Fry as she discusses these questions and more with Nicklas Lundblad, Director of Public Policy at Google DeepMind. Nicklas emphasises the importance of a nuanced approach to regulation, focusing on adaptability and evidence-based policymaking. He highlights the complexities of assessing risk and reward in emerging technologies, advocating for a focus on harm reduction.

Further reading/watching:
AI Principles: https://ai.google/responsibility/principles/
Frontier Model Forum: https://blog.google/outreach-initiatives/public-policy/google-microsoft-openai-anthropic-frontier-model-forum/
Ethics of AI assistants with Iason Gabriel: https://youtu.be/aaZc-as-soA?si=0ThbYY30FlO31kKQ

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Bernardo Resende
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
--------
53:10
Inside NotebookLM with Raiza Martin and Steven Johnson
NotebookLM is a research assistant powered by Gemini that draws on expertise from storytelling to present information in an engaging way. It allows users to upload their own documents and generate insights, explanations, and, more recently, podcasts. This feature, also known as Audio Overviews, has captured the imagination of millions of people worldwide, who have created thousands of engaging podcasts ranging from personal narratives to educational explainers, using source materials like CVs, personal journals, sales decks, and more.

Join Raiza Martin and Steven Johnson from Google Labs, Google's testing ground for products, as they guide host Hannah Fry through the technical advancements that have made NotebookLM possible. In this episode they'll explore what it means to be interesting, the challenges of generating natural-sounding speech, and the exciting new modalities on the horizon.

Further reading:
Try NotebookLM here
Read about the speech generation technology behind Audio Overviews: https://deepmind.google/discover/blog/pushing-the-frontiers-of-audio-generation/

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Daniel Lazard
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Alex Baro Cayetano, Daniel Lazard
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
--------
44:18
AI for Science with Sir Paul Nurse, Demis Hassabis, Jennifer Doudna, and John Jumper
Join Professor Hannah Fry at the AI for Science Forum for a fascinating conversation with Google DeepMind CEO Demis Hassabis. They explore how AI is revolutionizing scientific discovery, delving into topics like the nuclear pore complex, plastic-eating enzymes, quantum computing, and the surprising power of Turing machines. The episode also features a special 'ask me anything' session with Nobel Laureates Sir Paul Nurse, Jennifer Doudna, and John Jumper, who answer audience questions about the future of AI in science.

Watch the episode here, and catch up on all of the sessions from the AI for Science Forum here.
Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
--------
54:23
The Ethics of AI Assistants with Iason Gabriel
Imagine a future where we interact regularly with a range of advanced artificial intelligence (AI) assistants, and where millions of assistants interact with each other on our behalf. These experiences and interactions may soon become part of our everyday reality.

In this episode, host Hannah Fry and Google DeepMind Senior Staff Research Scientist Iason Gabriel discuss the ethical implications of advanced AI assistants. Drawing from Iason's recent paper, they examine value alignment, anthropomorphism, safety concerns, and the potential societal impact of these technologies.

Timecodes:
00:00 Intro
01:13 Definition of AI assistants
04:05 A utopic view
06:25 Iason's background
07:45 The Ethics of Advanced AI Assistants paper
13:06 Anthropomorphism
14:07 Turing perspective
15:25 Anthropomorphism continued
20:02 The value alignment question
24:54 Deception
27:07 Deployed at scale
28:32 Agentic inequality
31:02 Unfair outcomes
34:10 Coordinated systems
37:10 A new paradigm
38:23 Tetradic value alignment
41:10 The future
42:41 Reflections from Hannah

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Daniel Lazard
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Production support: Mo Dawoud
Commissioned by Google DeepMind
Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
What is generative AI? How do you create safe and capable models? Is AI overhyped? Join mathematician and broadcaster Professor Hannah Fry as she answers these questions and more in the highly praised, award-winning podcast from Google DeepMind.
In this series, Hannah goes behind the scenes of the world-leading research lab to uncover the extraordinary ways AI is transforming our world. No hype, no spin, just compelling discussions and grand scientific ambition.