AI Security Podcast

Kaizenteq Team

44 episodes

  • How to Build Your Own AI Chief of Staff with Claude Code

    11/2/2026 | 47 mins.
    What if you could automate your entire work life with a personal AI Chief of Staff? In this episode, Caleb Sima walks Ashish through "Pepper," his custom-built AI agent that manages emails, schedules meetings, and even hires other AI experts to solve problems for him.
    Using Claude Code and a "vibe coding" approach, Caleb built a multi-agent system over a single holiday weekend without writing a single line of Rust code himself. We discuss how he used the same method to build a black-box testing agent that auto-files bugs on GitHub, and how he even designed the branding for his venture fund, White Rabbit.
    We explore why "intelligence is becoming a commodity" and how you can stay relevant by becoming an architect of AI agents rather than just a worker.

    Questions asked:
    (00:00) Introduction
    (03:20) Meet "Pepper": Caleb's AI Chief of Staff
    (05:40) How Pepper Dynamically Hires "Expert" Agents
    (07:30) Pepper Builds its Own Tools (MCP Servers)
    (11:50) Do You Need to Be a Coder to Do This?
    (12:50) Using "Claude Superpowers" to Orchestrate Agents
    (16:50) Automating a Venture Fund: Branding White Rabbit with AI
    (20:50) Building a "Black Box" Testing Agent in Rust (Without Knowing Rust)
    (28:50) The Developer Who Went Skiing While AI Did His Job
    (32:20) The Coming "App Sprawl" Crisis in Enterprise Security
    (36:00) Security Risks: Managing Shared Memory & Context
    (41:20) The Future of Work: Is Intelligence Becoming a Commodity?
    (44:50) Why Plumbers are Safe from AI
  • AI Security 2026 Predictions: The "Zombie Tool" Crisis & The Rise of AI Platforms

    28/1/2026 | 1h
    This is a forward-looking episode, as Ashish Rajan and Caleb Sima break down the 8 critical predictions shaping the future of AI security in 2026.
    We explore the impending "Age of Zombies," a crisis where thousands of unmaintainable, "vibe-coded" internal tools begin to rot as employees churn. We also unpack a controversial theory about the "circular economy" of token costs, suggesting that major providers are artificially keeping prices high to avoid a race to the bottom.
    The conversation dives deep into the shift from individual AI features to centralized AI platforms, the reality of the capability plateau where models are getting "better but not different," and the hilarious yet concerning story of Anthropic's Claude failing to operate a simple office vending machine without resorting to socialism or buying stun guns.

    Questions asked:
    (00:00) Introduction: 2026 Predictions
    (02:50) Prediction 1: The Capability Plateau (Why models feel the same)
    (05:30) Consumer vs. Enterprise: Why OpenAI wins consumer, but Anthropic wins code
    (09:40) Prediction 2: The "Evil Conspiracy" of High AI Costs
    (12:50) Prediction 3: The Rise of the Centralized AI Platform Team
    (15:30) The "Free License" Trap: Microsoft Copilot & Enterprise fatigue
    (20:40) Prediction 4: Hyperscalers Shift from Features to Platforms (AWS Agents)
    (23:50) Prediction 5: Agent Hype vs. Reality (Netflix & Instagram examples)
    (27:00) Real-World Use Case: Auto-Fixing 1,000 Vulnerabilities in 2 Days
    (31:30) Prediction 6: Vibe Coding is Replacing Security Vendors
    (34:30) Prediction 7: Prompt Injection is Still the #1 Unsolved Threat
    (43:50) Prediction 8: The "Confused Deputy" Identity Problem
    (51:30) The "Zombie Tool" Crisis: Why Vibe Coded Tools will Rot
    (56:00) The Claude Vending Machine Failure: Why Operations are Harder than Code
  • Why AI Agents Fail in Production: Governance, Trust & The "Undo" Button

    23/1/2026 | 51 mins.
    Is your organization stuck in "read-only" mode with AI agents? You're not alone. In this episode, Dev Rishi (GM of AI at Rubrik, formerly CEO of Predibase) joins Ashish and Caleb to dissect why enterprise AI adoption is stalling at the experimentation phase and how to safely move to production.
    Dev reveals the three biggest fears holding IT leaders back: shadow agents, lack of real-time governance, and the inability to "undo" catastrophic mistakes. We dive deep into "Agent Rewind," a capability to roll back changes made by rogue AI agents (like deleting a production database), and why this remediation layer is critical for trust.
    The conversation also explores the technical architecture needed for safe autonomous agents, including the debate between the MCP (Model Context Protocol) and A2A (Agent to Agent) standards. Dev explains why traditional anomaly detection fails for AI and proposes a new model of AI-driven policy enforcement using small language models (SLMs) as judges.

    Questions asked:
    (00:00) Introduction
    (02:50) Who is Dev Rishi? From Predibase to Rubrik
    (04:00) The Shift from Fine-Tuning to Foundation Models
    (07:20) Enterprise AI Use Cases: Background Checks & Call Centers
    (11:30) The 4 Phases of AI Adoption: Where are most companies?
    (13:50) The 3 Biggest Fears of IT Leaders: Shadow Agents, Governance, & Undo
    (18:20) "Agent Rewind": How to Undo a Rogue Agent's Actions
    (23:00) Why Agents are Stuck in "Read-Only" Mode
    (27:40) Why Anomaly Detection Fails for AI Security
    (30:20) Using AI Judges (SLMs) for Real-Time Policy Enforcement
    (34:30) LLM Firewalls vs. Bespoke Policy Enforcement
    (44:00) Identity for Agents: Scoping Permissions & Tools
    (46:20) MCP vs. A2A: Which Protocol Wins?
    (48:40) Why A2A is Technically Superior but MCP Might Win
  • AI Security 2025 Wrap: 9 Predictions Hit & The AI Bubble Burst of 2026

    19/12/2025 | 1h 3 mins.
    It's the season finale of the AI Security Podcast! Ashish Rajan and Caleb Sima look back at their 2025 predictions and reveal that they went 9 for 9. We wrap up the year by dissecting exactly what the industry got right (and wrong) about the trajectory of AI, providing a definitive "state of the union" for AI security.
    We analyze why SOC Automation became the undisputed king of real-world AI impact in 2025, while mature AI production systems failed to materialize beyond narrow use cases due to skyrocketing costs and reliability issues. We also review the accuracy of our forecasts on the rise of AI Red Teaming, the continued overhyping of Agentic AI, and why Data Security emerged as a critical winner in a geo-locked world.
    Looking ahead to 2026, the conversation shifts to bold new predictions: the inevitable bursting of the "AI Bubble" as valuations detach from reality, and the rise of self-fine-tuning models. We also explore the controversial idea that the "AI Engineer" is merely a rebrand for data scientists, and a lot more…

    Questions asked:
    (00:00) Introduction: 2025 Season Wrap Up
    (02:50) State of AI Utility in late 2025: From coding to daily tasks
    (09:30) 2025 Report Card: Mature AI Production Systems? (Verdict: Correct)
    (10:45) The Cost Barrier: Why Production AI is Expensive
    (13:50) 2025 Report Card: SOC Automation is #1 (Verdict: Correct)
    (16:00) 2025 Report Card: The Rise of AI Red Teaming (Verdict: Correct)
    (17:20) 2025 Report Card: AI in the Browser & OS
    (21:00) Security Reality: Prompt Injection is still the #1 Risk
    (22:30) 2025 Report Card: Data Security is the Winner
    (24:45) 2025 Report Card: Geo-locking & Data Sovereignty
    (28:00) 2026 Outlook: Age Verification & Adult Content Models
    (33:00) 2025 Report Card: "Agentic AI" is Overhyped (Verdict: Correct)
    (39:50) 2025 Report Card: CISOs Should NOT Hire "AI Engineers" Yet
    (44:00) The "AI Engineer" is just a rebranded Data Scientist
    (46:40) 2026 Prediction: Self-Training & Self-Fine-Tuning Models
    (47:50) 2026 Prediction: The AI Bubble Will Burst
    (49:50) Bold Prediction: Will OpenAI Disappear?
    (01:01:20) Final Thoughts: Looking ahead to Season 4
  • AI Paywall for Browsers & The End of the Open Web?

    10/12/2025 | 39 mins.
    Cloudflare announced this year that AI bots must pay to crawl content. In this episode, Ashish Rajan and Caleb Sima dive deep into what this means for the future of the "open web" and why search engines as we know them might be dying.
    We explore Cloudflare's new model, in which websites can whitelist AI crawlers in exchange for payment, effectively putting a price tag on the world's information. Caleb speaks about the potential security implications, predicting a shift towards a web that requires strict identity and authentication for both humans and AI agents.
    The conversation also covers the new open-source browser Ladybird, backed by Cloudflare and positioned as a competitor to the dominant Chromium engine. Is this the beginning of Web 3.0, where "information becomes currency"? Tune in to understand the massive shifts coming to browser security, AI agent identity, and the economics of the internet.

    Questions asked:
    (00:00) Introduction
    (01:55) Cloudflare's Announcement: Blocking AI Bots Unless They Pay
    (03:50) Why Search Engines Are Dying & The "Oracle" of AI
    (05:40) How the Payment Model Works: Bidding for Content Access
    (09:30) Will This Adoption Come from Enterprise or Bloggers?
    (11:45) Security Implications: The Web Requires Identity & Auth
    (13:50) Phase 2: Cloudflare's New Browser "Ladybird" vs. Chromium
    (19:00) Moving from B2B to Consumer: Paying Per Article via Browser
    (21:50) Managing AI Agent Identity: Who is Buying This Dinner?
    (23:20) Why Did We Switch to Chrome? (Performance vs. Memory)
    (27:00) Jony Ive & Sam Altman's AI Device: The Future Interface?
    (30:20) Google's Response: New Tools like "Opal" to Compete with n8n
    (33:15) The Controversy: Is This the End of the Free Open Web?
    (36:20) The New Economics of the Internet: Information as Currency

    Resources discussed during the interview:
    Cloudflare Just Changed How AI Crawlers Scrape the Internet-at-Large; Permission-Based Approach Makes Way for A New Business Model

About AI Security Podcast

The #1 source for AI Security insights for CISOs and cybersecurity leaders. Hosted by two former CISOs, the AI Security Podcast provides expert, no-fluff discussions on the security of AI systems and the use of AI in Cybersecurity. Whether you're a CISO, security architect, engineer, or cyber leader, you'll find practical strategies, emerging risk analysis, and real-world implementations without the marketing noise. These conversations are helping cybersecurity leaders make informed decisions and lead with confidence in the age of AI.
