LLMs may have kicked off this AI boom, but the ceiling is closer than the hype suggests. As models run out of text data to train on, the companies and investors paying attention are already moving on. The next wave isn't better chatbots; it's machines that can understand the physical world. Luma AI, the Bay Area lab that raised over $1.4 billion from a16z, Nvidia, and Amazon, is betting on exactly that.
On this episode of TechCrunch's Equity podcast, we’re bringing you a conversation from Web Summit Qatar, where Rebecca Bellan sat down with Amit Jain, co-founder and CEO of Luma AI. Together, the pair dug into where the next trillion-dollar AI opportunity actually gets built, and whether the companies chasing it even know what they're building yet.
Listen to the full episode to hear about:
- Why video, audio, and images are the real frontier for AI training data, not text
- What an "intelligent world model" actually is, and why Jain thinks most companies building them are getting it completely wrong
- The case for why AI won't kill creative jobs, and why Jain thinks studio heads are the real problem
- How the path from video generation to robotics to AGI is simpler than anyone's making it sound
Subscribe to Equity on YouTube, Apple Podcasts, Overcast, Spotify and all the casts. You can also follow Equity on X and Threads, at @EquityPod.
Chapters:
Intro
Why LLMs are hitting a ceiling
The data problem & what comes after LLMs
What actually makes a world model a world model
Why 3D data is a dead end
What Luma is building next
How much humans stay in the loop
Near-term use cases for agentic video
Will AI kill jobs in film & production?
Why the entertainment industry is already dying
Why we actually need more content, not less
Luma's roadmap: generation, understanding, and robotics
Outro
Learn more about your ad choices. Visit megaphone.fm/adchoices