905: Why RAG Makes LLMs Less Safe (And How to Fix It), with Bloomberg’s Dr. Sebastian Gehrmann
RAG LLMs are not safer: Sebastian Gehrmann speaks to Jon Krohn about his latest research into how retrieval-augmented generation (RAG) actually makes LLMs less safe, the three ‘H’s for gauging the effectiveness and value of a RAG system, and the custom guardrails and procedures needed to ensure a RAG system is fit for purpose and secure. This is a great episode for anyone who wants to work with RAG in the context of LLMs: you’ll hear how to select the right model for the task, useful approaches and taxonomies for keeping your projects secure, and which models he finds safest when RAG is applied.
Additional materials: www.superdatascience.com/905
This episode is brought to you by Adverity, the conversational analytics platform, and by the Dell AI Factory with NVIDIA.
Interested in sponsoring a SuperDataScience Podcast episode? Email [email protected] for sponsorship information.
In this episode you will learn:
(03:28) Findings from the paper “RAG LLMs are Not Safer: A Safety Analysis of Retrieval-Augmented Generation for Large Language Models”
(09:35) What “attack surfaces” are in the context of AI
(38:51) Small versus large models with RAG
(46:27) How to select an LLM with safety in mind