Dive into the realities of AI-assisted coding, the origins of modern fine-tuning, and the cognitive science behind machine learning with fast.ai founder Jeremy Howard. In this episode, we unpack why AI might be turning software engineering into a slot machine and how to maintain true technical intuition in the age of large language models.
GTC, the premier AI conference, is coming and is a great opportunity to learn about AI. NVIDIA and its partners will showcase breakthroughs in physical AI, AI factories, agentic AI, and inference, exploring the next wave of AI innovation for developers and researchers. Register for virtual GTC for free using my link for a chance to win an NVIDIA DGX Spark: https://nvda.ws/4qQ0LMg
Jeremy Howard is a renowned data scientist, researcher, entrepreneur, and educator. As the co-founder of fast.ai, former President of Kaggle, and the creator of ULMFiT, Jeremy has spent decades democratizing deep learning. His pioneering work laid the foundation for modern transfer learning and the pre-training and fine-tuning paradigm that powers today's language models.
Key Topics and Main Insights Discussed:
The Origins of ULMFiT and Fine-Tuning
The Vibe Coding Illusion and Software Engineering
Cognitive Science, Friction, and Learning
The Future of Developers
RESCRIPT: https://app.rescript.info/public/share/BhX5zP3b0m63srLOQDKBTFTooSzEMh_ARwmDG_h_izk
Jeremy Howard:
---
TIMESTAMPS:
Introduction & GTC Sponsor
ULMFiT & The Birth of Fine-Tuning
Intuition & The Mechanics of Learning
Abstraction Hierarchies & AI Creativity
Claude Code & The Interpolation Illusion
Coding vs. Software Engineering
Cosplaying Intelligence: Dennett vs. Searle
Automation, Radiology & Desirable Difficulty
Organizational Knowledge & The Slope
Vibe Coding as a Slot Machine
The Erosion of Control in Software
Interactive Programming & REPL Environments
The Notebook Debate & Exploratory Science
AI Existential Risk & Power Centralization
Current Risks, Privacy & Enfeeblement
---
REFERENCES:
Blog Post:
[] fast.ai Blog: Self-Supervised Learning
https://www.fast.ai/posts/2020-01-13-self_supervised.html
[] DeepMind Blog: Gemini Deep Think
[] Modular Blog: Claude C Compiler analysis
https://www.modular.com/blog/the-claude-c-compiler-what-it-reveals-about-the-future-of-software
[] Anthropic Engineering Blog: Building a C Compiler
https://www.anthropic.com/engineering/building-c-compiler
[] Cursor Blog: Scaling Agents
https://cursor.com/blog/scaling-agents
[] fast.ai Blog: nbdev merge driver for Jupyter and Git
https://www.fast.ai/posts/2022-08-25-jupyter-git.html
[] Jeremy Howard: Response to AI Risk Letter
https://www.normaltech.ai/p/is-avoiding-extinction-from-ai-really
Book:
[] M. Chirimuuta: The Brain Abstracted
https://mitpress.mit.edu/9780262548045/the-brain-abstracted/
[] Daniel Dennett: Consciousness Explained
https://www.amazon.com/Consciousness-Explained-Daniel-C-Dennett/dp/0316180661
[] Cesar Hidalgo: Infinite Alphabet / Laws of Knowledge
https://www.amazon.com/Infinite-Alphabet-Laws-Knowledge/dp/0241655676
Archive Article:
[] MLST Archive: Why Creativity Cannot Be Interpolated
https://archive.mlst.ai/read/why-creativity-cannot-be-interpolated
Research Study:
[] METR Study: AI OS Development
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
Paper:
[] Fred Brooks: No Silver Bullet
https://www.cs.unc.edu/techreports/86-020.pdf
[] John Searle: Minds, Brains, and Programs