Jonathan Ross, Founder & CEO @ Groq: NVIDIA vs Groq - The Future of Training vs Inference
Jonathan Ross is the Founder & CEO of Groq, the creator of the world’s first Language Processing Unit (LPU™). Prior to Groq, Jonathan began what became Google’s Tensor Processing Unit (TPU) as a 20% project, where he designed and implemented the core elements of the first-generation TPU chip. Jonathan next joined Google X’s Rapid Eval Team, the initial stage of the famed “Moonshots Factory”, where he devised and incubated new Bets (Units) for Google’s parent company, Alphabet.
In Today’s Episode We Discuss:
Scaling Laws and AI Model Training
Synthetic Data and Model Efficiency
Inference vs. Training Costs: Why NVIDIA Loses Inference
The Future of AI Inference: Efficiency and Cost
Chip Supply and Scaling Concerns
Energy Efficiency in AI Computation
Why Most Dollars Into Datacenters Will Be Lost
Meta, Google, and Microsoft's Data Center Investments
Distribution of Value in the AI Economy
Stages of Startup Success
The AI Investment Bubble
The Keynesian Beauty Contest in VC
NVIDIA's Role in the AI Ecosystem
China's AI Strategy and Global Implications
Europe's Potential in the AI Revolution
Future Predictions and AI's Impact on Society