
Jensen Huang built the hardware substrate on which the entire AI revolution runs. NVIDIA's GPUs, originally designed for video game graphics, turned out to be ideal for the massively parallel matrix computations that deep learning requires. Huang recognized this opportunity early and pivoted NVIDIA to become the dominant supplier of AI training and inference hardware. CUDA, NVIDIA's parallel computing platform, became the de facto standard for AI development. By 2024, NVIDIA's market capitalization had surpassed $3 trillion, reflecting the market's belief that compute infrastructure is the critical bottleneck for AI progress. Without Huang's bet on AI compute, the scaling laws that drive current progress might well have hit a hardware wall years ago.
“Software is eating the world, but AI is going to eat software.” — 2017
“Accelerated computing and generative AI have hit the tipping point. Demand is surging worldwide across companies, industries, and nations.” — 2024