Nvidia Corporation has begun shipping early samples of its next-generation Vera Rubin artificial intelligence chips to customers, the company revealed during its fiscal fourth-quarter earnings call on Wednesday. The announcement came as the Santa Clara-based chipmaker, now the world's most valuable company by market capitalisation, posted another set of results that surpassed Wall Street expectations, demonstrating sustained momentum in the global AI boom.
Chief Financial Officer Colette Kress stated that the company delivered "our first Vera Rubin samples" earlier this week, with broader shipments expected to commence in the second half of 2026. "We expect every cloud model builder to deploy Vera Rubin," Kress said. CEO Jensen Huang had previously indicated at the Consumer Electronics Show in January that the Rubin platform delivers more than three times the speed and five times the inference performance of the company's current Blackwell systems.
Strategic Positioning at the Centre of AI
Throughout the earnings call, Huang repeatedly positioned Nvidia as the foundational platform for the industry's leading players. He confirmed the company is "close" to finalising a multibillion-dollar partnership with OpenAI, whose latest Codex model is trained on Nvidia's Blackwell systems. This follows a previously outlined 2025 infrastructure initiative potentially worth $100 billion.
Nvidia also announced an investment of up to $10 billion in AI safety and research company Anthropic, and highlighted that Meta Platforms Inc. is deploying Nvidia GPUs in its pursuit of artificial general intelligence. "We want to take the great opportunity that we have as we're in the beginning of this new computing era, this new computing platform shift, to put everybody on Nvidia," Huang stated, outlining a goal to support every form of AI, from large language models to robotics.
Architecture Expansion and Market Risks
Addressing the company's technology roadmap, Huang indicated a preference for maintaining a single, unified chip design but revealed plans to extend Nvidia's architecture by integrating technology from Groq, a company specialising in low-latency AI inference. More details are expected at Nvidia's GPU Technology Conference (GTC) in March. This follows a non-exclusive licensing agreement struck with Groq in late 2024, which also brought key Groq engineers to Nvidia.
Despite the bullish outlook, the company flagged significant risks in its latest annual 10-K filing with the U.S. Securities and Exchange Commission (SEC). It cited the availability of data centre capacity, energy, and capital for infrastructure build-out as potential constraints. "Expanding energy capacity to meet demand is a complex, multi-year process involving significant regulatory, technical, and construction challenges," the filing noted, warning that any shortage could impact future revenue.
Ecosystem Investments and Future Outlook
Responding to questions about circular customer relationships, Huang defended Nvidia's growing portfolio of strategic investments in AI companies, including the Anthropic deal and the pending OpenAI partnership. He framed the strategy as essential for strengthening the broader AI ecosystem and ensuring the next generation of technology is built on Nvidia's hardware and software platform.
The company's revenue forecast for the current quarter sailed past analyst estimates, signalling continued robust demand for its AI accelerators. The upbeat results arrive during a period of heightened scrutiny for AI-linked stocks, which have recently shown signs of market fatigue after a prolonged rally.