
AI Agents Can Code 10,000 Lines of Hacking Tools In Seconds - Dr. Ilia Shumailov (ex-GDM)

04.10.2025
Listen to the episode on your favorite platforms:
  • Apple Podcasts
  • Spotify
  • Castbox
  • Pocket Casts
  • Overcast
  • Castro
  • RadioPublic

Dr. Ilia Shumailov - Former DeepMind AI Security Researcher, now building security tools for AI agents

Ever wondered what happens when AI agents start talking to each other—or worse, when they start breaking things? Ilia Shumailov spent years at DeepMind thinking about exactly these problems, and he's here to explain why securing AI is way harder than you think.

**SPONSOR MESSAGES**

Check out NotebookLM for your research project; it's really powerful: https://notebooklm.google.com/

Take the Prolific human data survey - https://www.prolific.com/humandatasurvey?utm_source=mlst - and be the first to see the results and benchmark your practices against the wider community!

cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy

Oct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!), plus speakers from OpenAI, Anthropic, NVIDIA, and more

Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlst

Submit investment deck: https://cyber.fund/contact?utm_source=mlst

We're racing toward a world where AI agents will handle our emails, manage our finances, and interact with sensitive data 24/7. But there is a problem. These agents are nothing like human employees. They never sleep, they can touch every endpoint in your system simultaneously, and they can generate sophisticated hacking tools in seconds. Traditional security measures designed for humans simply won't work.

Dr. Ilia Shumailov

https://x.com/iliaishacked

https://iliaishacked.github.io/

https://sequrity.ai/

TRANSCRIPT:

https://app.rescript.info/public/share/dVGsk8dz9_V0J7xMlwguByBq1HXRD6i4uC5z5r7EVGM

TOC:

- Introduction & Trusted Third Parties via ML

- Background & Career Journey

- Safety vs Security Distinction

- Prompt Injection & Model Capability

- Agents as Worst-Case Adversaries

- Personal AI & CAML System Defense

- Agents vs Humans: Threat Modeling

- Calculator Analogy & Agent Behavior

- IMO Math Solutions & Agent Thinking

- Diffusion of Responsibility & Insider Threats

- Open Source Security Concerns

- Supply Chain Attacks & Trust Issues

- Architectural Backdoors

- Academic Incentives & Defense Work

- Semantic Censorship & Halting Problem

- Model Collapse: Theory & Criticism

- Career Advice & Ross Anderson Tribute

REFS:

Lessons from Defending Gemini Against Indirect Prompt Injections

https://arxiv.org/abs/2505.14534

Defeating Prompt Injections by Design.

Debenedetti, E., Shumailov, I., Fan, T., Hayes, J., Carlini, N., Fabian, D., Kern, C., Shi, C., Terzis, A., & Tramèr, F.

https://arxiv.org/pdf/2503.18813

Agentic Misalignment: How LLMs could be insider threats

https://www.anthropic.com/research/agentic-misalignment

Stop Anthropomorphizing Intermediate Tokens as Reasoning/Thinking Traces!

Kambhampati, S., et al.

https://arxiv.org/pdf/2504.09762

Machine learning models have a supply chain problem.

Meiklejohn, S., Blauzvern, H., Maruseac, M., Schrock, S., Simon, L., & Shumailov, I. (2025).

https://arxiv.org/abs/2505.22778

Supply-chain attacks in machine learning frameworks.

Gao, Y., Shumailov, I., & Fawaz, K. (2025).

https://openreview.net/pdf?id=EH5PZW6aCr

Apache Log4j Vulnerability Guidance

https://www.cisa.gov/news-events/news/apache-log4j-vulnerability-guidance

Architectural backdoors in neural networks.

Bober-Irizar, M., Shumailov, I., Zhao, Y., Mullins, R., & Papernot, N. (2022).

https://arxiv.org/pdf/2206.07840

Position: Fundamental Limitations of LLM Censorship Necessitate New Approaches

Glukhov, D., Shumailov, I., ...

https://proceedings.mlr.press/v235/glukhov24a.html

AlphaEvolve MLST interview [Matej Balog, Alexander Novikov]

https://www.youtube.com/watch?v=vC9nAosXrJw