François Chollet has spent years asking a different question than most of the AI world. Instead of scaling what already works, he's trying to understand what intelligence actually is, and how to build it from first principles. In this episode of Lightcone, he traces that path from his early work on deep learning to the creation of the ARC Prize and the launch of ARC-AGI V3, a new benchmark designed to measure something deeper than performance: the ability to learn, adapt, and reason efficiently in entirely new environments. He explains why today's systems may be hitting limits, what recent breakthroughs really mean, and why reaching true general intelligence may require a fundamentally different approach.

Chapters:
- AGI by 2030?
- Introducing Ndea: A New Path Beyond Deep Learning
- A New ML Paradigm
- Replacing Neural Nets With Compact Symbolic Programs
- Why Ndea Isn't Competing With Coding Agents
- Why Everyone Might Be Wrong About Scaling LLMs
- Why Coding Agents Suddenly Work So Well
- The Limits of LLMs in Non-Verifiable Domains
- What AGI Actually Means (And Why Most Definitions Are Wrong)
- Why Deep Learning Hits a Wall
- ARC's Origin Story
- ARC Benchmarks Explained: From V1 to V3
- The RL Loop Powering Coding Agents Today
- ARC-AGI V3: Measuring "Agentic Intelligence"
- Inside the ARC Game Studio
- Could AGI Fit in 10,000 Lines of Code?
- Building Ndea: From Idea to Compounding Research Stack
- The Future of ARC: Benchmarks That Evolve With AI
- Why There's Still Huge Opportunity for New AI Paradigms
- How to Build a Breakout Open Source Project
- Lessons From Keras
- Advice For How To Think About AI

Apply to Y Combinator: https://www.ycombinator.com/apply
Work at a startup: https://www.ycombinator.com/jobs






