Nvidia’s blockbuster quarter extends AI surge

Nvidia has delivered another blockbuster quarter, posting revenue of $68.1 billion and giving a next-quarter forecast of $78 billion, a result that underlined how aggressively the world’s largest technology groups are still spending on artificial intelligence infrastructure. Data centre sales reached $62.3 billion, up 75 per cent from a year earlier, showing that demand for high-end chips, networking gear and full AI systems remains strong despite growing questions over costs, competition and power use.
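
A quick back-of-the-envelope sketch, using only the figures above, shows what 75 per cent growth implies for the year-ago comparison; the derived value is an illustration, not a number from the company’s filing:

# Implied year-ago data centre revenue, from this quarter's figure
# and the reported 75 per cent year-on-year growth rate.
dc_revenue_now = 62.3    # data centre sales this quarter, $bn (from the article)
yoy_growth = 0.75        # 75 per cent year-on-year growth (from the article)
dc_revenue_year_ago = dc_revenue_now / (1 + yoy_growth)
print(f"Implied year-ago figure: ${dc_revenue_year_ago:.1f}bn")  # roughly $35.6bn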

The scale of the quarter matters beyond one company’s earnings. Nvidia now sits at the centre of a global build-out in AI computing, supplying the processors and systems that underpin large language models, cloud AI services and the growing shift from experimental tools to commercial deployment. Its latest numbers suggest that the spending boom has not peaked. Instead, buyers are moving from older Hopper-based systems to Blackwell platforms, while preparing for the next product cycle built around Rubin. Nvidia has described Rubin as the next generation of its AI supercomputing platform, with full production under way in 2026.

That helps explain why the company’s outlook landed so forcefully on Wall Street. Analysts had been looking for guidance closer to $72.6 billion, meaning the $78 billion forecast comfortably cleared expectations and signalled that order visibility remains unusually strong for such a large semiconductor company. This is no longer simply a story about training large AI models. Demand is increasingly being shaped by inference, the stage where models are put to work in search, software, advertising, coding assistants, customer service and industrial systems. Nvidia’s own leadership has argued that computing demand is now accelerating across both training and inference rather than shifting from one to the other.
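
For scale, a short sketch of how far the forecast cleared consensus, using the two figures just cited:

# Size of the guidance beat relative to the analyst consensus.
guidance = 78.0      # Nvidia's next-quarter revenue forecast, $bn
consensus = 72.6     # analyst consensus estimate, $bn
beat = guidance - consensus
print(f"Beat: ${beat:.1f}bn, or {beat / consensus:.1%} above consensus")
# -> about $5.4bn, or roughly 7.4%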

The backdrop is a capex cycle of historic size. Across the largest cloud and internet groups, planned spending on data centres, chips and related AI infrastructure is running well above $600 billion for 2026 by several market estimates, with some projections moving closer to $650 billion and others pointing still higher after fresh announcements. One new sign of that intensity came this week with a cloud and AI partnership that envisages more than $100 billion of spending over a decade, alongside a separate expectation that one of the largest cloud groups alone could spend about $200 billion this year. Those figures help explain why Nvidia’s order book has remained so resilient even as customers pursue a mix of Nvidia hardware and in-house silicon.

Blackwell is the main engine of the current wave. The architecture is already shipping commercially, and Nvidia has been framing it as the foundation for the next stage of accelerated computing. Rubin, unveiled as the successor platform, is meant to keep the company on an annual cadence of AI supercomputer upgrades. That cadence matters because hyperscalers and model developers are now treating computing capacity as a strategic moat. Faster replacement cycles mean stronger pricing power for suppliers, but they also raise the pressure on buyers to keep investing just to stay competitive.

Still, the quarter does not settle the debate over how durable the AI spending spree will be. Nvidia remains exposed to a small number of very large customers. Two clients accounted for 36 per cent of sales in the quarter, a reminder that revenue concentration is still high even as the company broadens its reach across enterprises, sovereign AI projects and industrial users. Competition is also intensifying. Meta has extended its custom chip partnership with Broadcom through 2029, while Google, Amazon and others continue building internal alternatives aimed at reducing dependence on Nvidia over time, especially for inference workloads where cost efficiency matters most.
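
Put in dollar terms, a minimal sketch of what that concentration means against the quarter’s headline revenue; the product is an illustration derived from the two reported figures:

# Rough dollar value of the two-customer concentration.
total_revenue = 68.1     # quarterly revenue, $bn (from the article)
top_two_share = 0.36     # share of sales from two clients (from the article)
print(f"Two clients: about ${total_revenue * top_two_share:.1f}bn")  # roughly $24.5bn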

Supply and infrastructure constraints are another reason for caution. Taiwan Semiconductor Manufacturing Co has raised its own outlook and capital spending plans to keep up with AI chip demand, a sign that the supply chain is still racing to add capacity. At the same time, the power needs of AI data centres are becoming a bigger issue for investors and policymakers. Estimates now suggest US data centres could require as much as 80 gigawatts of power by 2028, with a sizeable shortfall possible if grid expansion lags. That does not weaken Nvidia’s near-term position, but it does show that the AI boom will depend on electricity, cooling, land, permits and financing as much as on chips.
