This article was originally published on Fool.com. All figures quoted in US dollars unless otherwise stated.
Larry Ellison is the chairman of Oracle (NYSE: ORCL), which is currently building some of the fastest and most cost-efficient data centres in the world for developing artificial intelligence (AI).
Elon Musk, on the other hand, runs Tesla (NASDAQ: TSLA), which is building AI-powered self-driving software for its electric vehicles. He also runs SpaceX, X (formerly Twitter), and a new AI start-up called xAI.
Ellison and Musk need tens of thousands of graphics processing units (GPUs) for their data centres in order to bring AI to life, and Nvidia (NASDAQ: NVDA) supplies the best chips in the industry.
At Oracle's financial analyst meeting on 12 September, Ellison told the audience that he and Musk recently went to dinner with Nvidia CEO Jensen Huang at the Nobu restaurant in Palo Alto. The two, who are among the richest people on Earth, found themselves begging Huang for something money simply can't buy at the moment. Here's how it went down.
The arms race for GPUs
Oracle currently has 162 data centres either live or under construction, but it believes that number could eventually top 2,000 because the demand for computing power from AI developers is soaring. Some of Oracle's largest data centres feature clusters of more than 32,000 GPUs, but next year, the company will offer a cluster of 131,072 GPUs from Nvidia's latest Blackwell lineup.
Oracle designed unique RDMA (remote direct memory access) networking technology that can move data from one point to another more quickly than traditional Ethernet networks, and since developers pay for computing power by the minute, faster networking can significantly reduce their costs. That's why leading AI start-ups like OpenAI, Cohere, and even Musk's xAI are using Oracle's infrastructure.
In its recent fiscal 2025 first quarter (ended 31 July), the Oracle Cloud Infrastructure (OCI) segment generated $2.2 billion in revenue, a whopping 45% jump from the year-ago period. However, it could be growing even faster if not for supply constraints -- in other words, Oracle simply can't get its hands on enough GPUs for its data centres.
Not only is Oracle battling other cloud giants like Microsoft, Amazon, and Alphabet for GPU allocations from Nvidia, but tech companies like Tesla and Meta Platforms are also soaking up supply to develop AI for their own purposes. Tesla is trying to bring a cluster of 50,000 GPUs online this year to enhance its self-driving software, which requires a substantial amount of computing power.
Meta, on the other hand, used around 16,000 of Nvidia's flagship H100 GPUs to train its Llama 3.1 large language model (LLM), but the company plans to increase its capacity to a mind-boggling 600,000 H100 equivalents by the end of this year. That will pave the way for Llama 4, which CEO Mark Zuckerberg says could set the benchmark for the industry in 2025.
Ellison and Musk are begging for more GPUs
Here's what Ellison and Musk told Jensen Huang over dinner, as Ellison recounted it:
Please take our money ... take more of it. You're not taking enough. ... We need you to take more of our money. Please.
Ellison and Musk were practically begging Huang for more GPUs, but no amount of money in the world can buy the numbers they require right now because Nvidia simply can't keep up with demand. Oracle and Tesla aren't even Nvidia's biggest customers!
Oracle spent $6.9 billion on capital expenditures (capex) during fiscal 2024 (which ended 30 April) and expects to spend double that in fiscal 2025. Most of the money will go toward buying chips and building data centres. Tesla plans to spend more than $10 billion on capex this calendar year, part of which will go toward the 50,000-GPU cluster I mentioned earlier.
Those numbers are modest compared to what other tech giants are spending. Microsoft allocated $55.7 billion to capex during its fiscal 2024 (ended 30 June), and it plans to spend even more in fiscal 2025. Amazon's capex, meanwhile, could top $60 billion in calendar 2024.
Therefore, it's no surprise that Nvidia generated $26.3 billion in data centre revenue during its recent fiscal 2025 second quarter (ended 28 July), a 154% increase from the year-ago period.
Ellison says the wave of AI spending could continue for the next 10 years as companies and nation-states battle for supremacy in AI, so Nvidia's data centre revenue probably has plenty of growth left in the tank.