
AI CyberSecurity Podcast

Kaizenteq Team
AI Cybersecurity simplified for CISOs and CyberSecurity Professionals.

Available Episodes

Showing 5 of 18 episodes
  • The Current State of AI and the Future for CyberSecurity in 2024
    In this jam-packed episode, our panel explored the current state and future of AI in the cybersecurity landscape. Hosts Caleb Sima and Ashish Rajan were joined by industry leaders Jason Clinton (CISO, Anthropic), Kristy Hornland (Cybersecurity Director, KPMG) and Vijay Bolina (CISO, Google DeepMind) to dive into the critical questions surrounding AI security. We’re at an inflection point where AI isn’t just augmenting cybersecurity; it’s fundamentally changing the game. From large language models to the use of AI in automating code writing and SOC operations, this episode examines the most significant challenges and opportunities in AI-driven cybersecurity. The experts discuss everything from the risks of AI writing insecure code to the future of multimodal models communicating with each other, raising important questions about trust, safety, and risk management. For anyone building a cybersecurity program in 2024 and beyond, this conversation is valuable as our panelists offer key insights into setting up resilient AI strategies, managing third-party risks, and navigating the complexities of deploying AI securely. Whether you're looking to stay ahead of AI's integration into everyday enterprise operations or explore advanced models, this episode provides the expert guidance you need.
    Questions asked:
    (00:00) Introduction
    (02:28) A bit about Kristy Hornland
    (02:50) A bit about Jason Clinton
    (03:08) A bit about Vijay Bolina
    (04:04) What are frontier/foundational models?
    (06:13) Open vs Closed Models
    (08:02) Securing multimodal models and inputs
    (12:03) Business use cases for AI use
    (13:34) Blindspots with AI Security
    (27:19) What is RPA?
    (27:47) AIs talking to other AIs
    (32:31) Third Party Risk with AI
    (38:42) Enterprise view of risk with AI
    (40:30) CISOs want visibility of AI usage
    (45:58) Third Party Risk Management for AI
    (52:58) Starting point for AI in a cybersecurity program
    (01:02:00) What the panelists have found amazing about AI
    --------  
    1:16:34
  • What is AI Native Security?
    In this episode of the AI Cybersecurity Podcast, Caleb and Ashish sat down with Vijay Bolina, Chief Information Security Officer at Google DeepMind, to explore the evolving world of AI security. Vijay shared his unique perspective on the intersection of machine learning and cybersecurity, explaining how organizations like Google DeepMind are building robust, secure AI systems. We dive into critical topics such as AI native security, the privacy risks posed by foundation models, and the complex challenges of protecting sensitive user data in the era of generative AI. Vijay also sheds light on the importance of embedding trust and safety measures directly into AI models, and how enterprises can safeguard their AI systems.
    Questions asked:
    (00:00) Introduction
    (01:39) A bit about Vijay
    (03:32) DeepMind and Gemini
    (04:38) Training data for models
    (06:27) Who can build an AI Foundation Model?
    (08:14) What is AI Native Security?
    (12:09) Does the response time change for AI Security?
    (17:03) What should enterprise security teams be thinking about?
    (20:54) Shared fate with Cloud Service Providers for AI
    (25:53) Final Thoughts and Predictions
    --------  
    27:48
  • BlackHat USA 2024 AI Cybersecurity Highlights
    What were the key AI Cybersecurity trends at BlackHat USA? In this episode of the AI Cybersecurity Podcast, hosts Ashish Rajan and Caleb Sima dive into the key insights from Black Hat 2024. From the AI Summit to the CISO Summit, they explore the most critical themes shaping the cybersecurity landscape, including deepfakes, AI in cybersecurity tools, and automation. The episode also features discussions on the rising concerns among CISOs regarding AI platforms and what these mean for security leaders.
    Questions asked:
    (00:00) Introduction
    (02:49) Black Hat, DEF CON and RSA Conference
    (07:18) Black Hat CISO Summit and CISO Concerns
    (11:14) Use Cases for AI in Cybersecurity
    (21:16) Are people tired of AI?
    (21:40) AI is mostly a side feature
    (25:06) LLM Firewalls and Access Management
    (28:16) The data security challenge in AI
    (29:28) The trend with Deepfakes
    (35:28) The trend of pentest automation
    (38:48) The role of an AI Security Engineer
    --------  
    46:56
  • Our insights from Google's AI Misuse Report
    In this episode of the AI Cybersecurity Podcast, we dive deep into the latest findings from Google DeepMind's report on the misuse of generative AI. Hosts Ashish and Caleb explore over 200 real-world cases of AI misuse across critical sectors like healthcare, education, and public services. They discuss how AI tools are being used to create deepfakes, fake content, and more, often with minimal technical expertise. They analyze these threats from a CISO's perspective but also include an intriguing comparison between human analysis and AI-generated insights using tools like ChatGPT and Anthropic's Claude. From the rise of AI-powered impersonation to the manipulation of public opinion, this episode uncovers the real dangers posed by generative AI in today's world.
    Questions asked:
    (00:00) Introduction
    (03:39) Generative Multimodal Artificial Intelligence
    (09:16) Introduction to the report
    (17:07) Enterprise Compromise of GenAI systems
    (20:23) Gen AI Systems Compromise
    (27:11) Human vs Machine
    Resources spoken about during the episode:
    Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data
    --------  
    33:46
  • AI Code Generation - Security Risks and Opportunities
    How much can we really trust AI-generated code today? How does AI-generated code compare to human-generated code in 2024? Caleb and Ashish spoke to Guy Podjarny, Founder and CEO at Tessl, about the evolving world of AI-generated code and the current state and future trajectory of AI in software development. They discuss the reliability of AI-generated code compared to human-generated code, the potential security risks, and the necessary precautions organizations must take to safeguard their systems. Guy has also recently launched his own podcast with Simon Maple called The AI Native Dev, which you can check out if you are interested in hearing more about the AI Native development space.
    Questions asked:
    (00:00) Introduction
    (02:36) What is AI Generated Code?
    (03:45) Should we trust AI Generated Code?
    (14:34) The current usage of AI in Code Generation
    (18:27) Securing AI Generated Code
    (23:44) Reality of Securing AI Generated Code Today
    (30:22) The evolution of Security Testing
    (37:36) Where to start with AI Security today?
    (50:18) Evolution of the broader cybersecurity industry with AI
    (54:03) The Positives of AI for Cybersecurity
    (01:00:48) The startup Landscape around AI
    (01:03:16) The future of AppSec
    (01:05:53) The future of security with AI
    --------  
    1:10:56
