Hosts Caleb Sima and Ashish Rajan caught up with experts Daniel Miessler (Unsupervised Learning) and Joseph Thacker (Principal AI Engineer, AppOmni) to talk about the true vulnerabilities of AI applications, how prompt injection is evolving, new attack vectors through images, audio, and video, and predictions for AI-powered hacking and its implications for enterprise security.
Whether you're a red teamer, a blue teamer, or simply curious about AI's impact on cybersecurity, this episode is packed with expert insights, practical advice, and future forecasts. Don’t miss out on understanding how attackers leverage AI to exploit vulnerabilities—and how defenders can stay ahead.
Questions asked:
(00:00) Introduction
(02:11) A bit about Daniel Miessler
(02:22) A bit about Rez0
(03:02) Intersection of Red Team and AI
(07:06) Is red teaming AI different?
(09:42) Humans or AI: Better at Prompt Injection?
(13:32) What is a security vulnerability for an LLM?
(14:55) Jailbreaking vs Prompt Injecting LLMs
(24:17) What's new for Red Teaming with AI?
(25:58) Prompt injection in Multimodal Models
(27:50) How Vulnerable are AI Models?
(29:07) Is Prompt Injection the only real threat?
(31:01) Predictions on how prompt injection will be stored or used
(32:45) What’s changed in the Bug Bounty Toolkit?
(35:35) How would internal red teams change?
(36:53) What can enterprises do to protect themselves?
(41:43) Where to start in this space?
(47:53) What are our guests most excited about in AI?
Resources
Daniel's Webpage - Unsupervised Learning
Joseph's Website
--------
51:24
The Current State of AI and the Future for Cybersecurity in 2024
In this jam-packed episode, our panel explored the current state and future of AI in the cybersecurity landscape. Hosts Caleb Sima and Ashish Rajan were joined by industry leaders Jason Clinton (CISO, Anthropic), Kristy Hornland (Cybersecurity Director, KPMG), and Vijay Bolina (CISO, Google DeepMind) to dive into the critical questions surrounding AI security.
We’re at an inflection point where AI isn’t just augmenting cybersecurity—it’s fundamentally changing the game. From large language models to the use of AI in automating code writing and SOC operations, this episode examines the most significant challenges and opportunities in AI-driven cybersecurity. The experts discuss everything from the risks of AI writing insecure code to the future of multimodal models communicating with each other, raising important questions about trust, safety, and risk management.
Anyone building a cybersecurity program in 2024 and beyond will find this conversation valuable, as our panelists offer key insights into setting up resilient AI strategies, managing third-party risks, and navigating the complexities of deploying AI securely. Whether you're looking to stay ahead of AI's integration into everyday enterprise operations or explore advanced models, this episode provides the expert guidance you need.
Questions asked:
(00:00) Introduction
(02:28) A bit about Kristy Hornland
(02:50) A bit about Jason Clinton
(03:08) A bit about Vijay Bolina
(04:04) What are frontier/foundational models?
(06:13) Open vs Closed Model
(08:02) Securing Multimodal models and inputs
(12:03) Business use cases for AI use
(13:34) Blindspots with AI Security
(27:19) What is RPA?
(27:47) AIs talking to other AIs
(32:31) Third Party Risk with AI
(38:42) Enterprise view of risk with AI
(40:30) CISOs want Visibility of AI Usage
(45:58) Third Party Risk Management for AI
(52:58) Starting point for AI in cybersecurity program
(01:02:00) What the panelists have found amazing about AI
--------
1:16:34
What is AI Native Security?
In this episode of the AI Cybersecurity Podcast, Caleb and Ashish sat down with Vijay Bolina, Chief Information Security Officer at Google DeepMind, to explore the evolving world of AI security. Vijay shared his unique perspective on the intersection of machine learning and cybersecurity, explaining how organizations like Google DeepMind are building robust, secure AI systems.
We dive into critical topics such as AI native security, the privacy risks posed by foundation models, and the complex challenges of protecting sensitive user data in the era of generative AI. Vijay also sheds light on the importance of embedding trust and safety measures directly into AI models, and how enterprises can safeguard their AI systems.
Questions asked:
(00:00) Introduction
(01:39) A bit about Vijay
(03:32) DeepMind and Gemini
(04:38) Training data for models
(06:27) Who can build an AI Foundation Model?
(08:14) What is AI Native Security?
(12:09) Does the response time change for AI Security?
(17:03) What should enterprise security teams be thinking about?
(20:54) Shared fate with Cloud Service Providers for AI
(25:53) Final Thoughts and Predictions
--------
27:48
Black Hat USA 2024 AI Cybersecurity Highlights
What were the key AI cybersecurity trends at Black Hat USA 2024? In this episode of the AI Cybersecurity Podcast, hosts Ashish Rajan and Caleb Sima dive into the key insights from Black Hat USA 2024. From the AI Summit to the CISO Summit, they explore the most critical themes shaping the cybersecurity landscape, including deepfakes, AI in cybersecurity tools, and automation. The episode also features discussions on the rising concerns among CISOs regarding AI platforms and what these mean for security leaders.
Questions asked:
(00:00) Introduction
(02:49) Black Hat, DEF CON and RSA Conference
(07:18) Black Hat CISO Summit and CISO Concerns
(11:14) Use Cases for AI in Cybersecurity
(21:16) Are people tired of AI?
(21:40) AI is mostly a side feature
(25:06) LLM Firewalls and Access Management
(28:16) The data security challenge in AI
(29:28) The trend with Deepfakes
(35:28) The trend of pentest automation
(38:48) The role of an AI Security Engineer
--------
46:56
Our Insights from Google's AI Misuse Report
In this episode of the AI Cybersecurity Podcast, we dive deep into the latest findings from Google DeepMind's report on the misuse of generative AI. Hosts Ashish and Caleb explore over 200 real-world cases of AI misuse across critical sectors like healthcare, education, and public services. They discuss how AI tools are being used to create deepfakes, fake content, and more, often with minimal technical expertise. They analyze these threats from a CISO's perspective but also include an intriguing comparison between human analysis and AI-generated insights using tools like ChatGPT and Anthropic's Claude. From the rise of AI-powered impersonation to the manipulation of public opinion, this episode uncovers the real dangers posed by generative AI in today's world.
Questions asked:
(00:00) Introduction
(03:39) Generative Multimodal Artificial Intelligence
(09:16) Introduction to the report
(17:07) Enterprise Compromise of GenAI systems
(20:23) GenAI Systems Compromise
(27:11) Human vs Machine
Resources spoken about during the episode:
Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data