What does it actually look like to run security inside one of Europe's fastest-growing AI companies? In this episode, recorded live at the Munich Cybersecurity Conference (MCSC), Ashish Rajan sat down with Igor Andriushchenko, Head of Security at Lovable, the AI-native platform that lets anyone build and ship full applications without writing a line of code.
Igor joined Lovable as employee #40. Six months later, the team had grown to 150+. Developers were running multi-agent workflows overnight, PMs were pushing pull requests, and the volume of code changes was hitting numbers that challenged every traditional security process they had. This is the security story nobody talks about in AI-native scale-ups, and Igor lived it.
In this episode, they cover:
· Why your CI/CD pipeline is being load-tested to destruction by AI-generated churn
· How to use PAM (Privileged Access Management) as a practical guardrail, so AI agents can't escalate to production secrets
· Why allow-list vs. deny-list logic is reversed for AI agents compared to traditional security (see the sketch after this list)
· The overlooked SCA (Software Composition Analysis) supply-chain risk when AI recommends unmaintained or hallucinated packages
· Why older SAST tools are failing, and what the new generation of agentic code scanners does differently
· How to identify and manage advanced, intermediate, and basic AI users in your org without killing their productivity
· The practical "crawl, walk, run" approach to building internal AI security tooling that actually sticks
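To make the deny-list point concrete: a minimal Python sketch of a guardrail that permits agent actions by default and blocks a small set of known-dangerous ones. Everything here (the `check_action` function, the `DENY_PATTERNS` list) is a hypothetical illustration of the general idea, not tooling from the episode or from Lovable.

```python
import re

# Traditional security often allow-lists known-good actions. For AI agents,
# whose useful action space is too broad to enumerate, the logic flips:
# permit by default, deny a small set of dangerous actions.
DENY_PATTERNS = [
    r"\bprod(uction)?[-_]?secrets?\b",  # touching production secrets
    r"\brm\s+-rf\s+/",                  # destructive filesystem commands
    r"\bDROP\s+TABLE\b",                # destructive SQL
]

def check_action(action: str) -> bool:
    """Return True if the agent action is permitted, False if deny-listed."""
    return not any(re.search(p, action, re.IGNORECASE) for p in DENY_PATTERNS)

if __name__ == "__main__":
    print(check_action("read docs/onboarding.md"))    # True: allowed by default
    print(check_action("export PROD_SECRETS to log")) # False: hits the deny list
```

The design choice is the inversion itself: rather than trying to enumerate everything an agent may do (which kills productivity), you enumerate only the few things it must never do.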
Igor also shares how Lovable's security team built an incident-response AI skill, how it uses reachability-analysis agents to triage SCA findings for enterprise customers, and why the real investment isn't in the AI model; it's in the skills ecosystem and data connections underneath.
Questions asked:
(00:00) Introduction: Securing the AI Workforce
(03:50) Who is Igor Andriushchenko? (Head of Security, Lovable)
(06:10) The Churn of Change: Why AI Will Break Your CI/CD
(10:40) The FOMO Problem: Don't Force AI Adoption
(11:50) The "Air Pocket" Strategy for Safe AI Experimentation
(14:00) The Context Paradox: More Access = Dumber AI
(17:40) Managing Agent Sprawl and "Advanced" Users
(19:40) Why You Must Treat AI Agents Like Human Developers (PAM Controls)
(22:30) The Need for AI Telemetry & Visibility
(27:50) Blurring Roles: When PMs Become Developers
(31:30) Why You Must Use "Deny Lists" Instead of "Allow Lists" for AI
(34:30) AI SAST vs. Traditional SAST: Finding Business Logic Flaws
(39:40) Supply Chain Risks: When AI Recommends Dead Libraries
(45:40) Building Custom AI Skills for Incident Response
(52:50) Fun Questions: Battlefield, Team Culture, and Comfort Food