OpenAI is declining commercial opportunities and making strategic product cuts because of a severe shortage of computing power, according to its Chief Financial Officer Sarah Friar. The constraint is expected to be especially acute in 2026, as global demand for artificial intelligence continues to surge, forcing the company into difficult prioritisation decisions.

In an interview with ARK Invest CEO Cathie Wood, Friar said the company is forgoing certain initiatives: "We're making some very tough trades at the moment and things we're not pursuing because we don't have enough compute." OpenAI President Greg Brockman confirmed the pressure in a separate interview, highlighting the company's struggle to keep up with demand.

Strategic Prioritisation and Product Impact

The compute shortage is directly shaping OpenAI's product roadmap. The company is now prioritising a small number of core, revenue-generating use cases, including a personal AI assistant and tools for solving complex tasks. "We can't possibly get to all of them," Brockman said, referring to the breadth of potential AI applications.

This strategic shift has already led to the discontinuation of some initiatives, such as its video generation app Sora. Friar emphasised the direct link between compute capacity and revenue, stating plainly, "If you do not have it [compute], you do not have revenue. That is one thing I know for sure."

Scaling Challenges and an Industry-Wide Bottleneck

OpenAI, which serves approximately 900 million consumers and more than 1 million businesses, is raising significant capital to secure future capacity. This includes a recent $122 billion funding round. Despite this, scaling remains a fundamental challenge. "We cannot build compute fast enough to keep up with demand," Brockman said, describing the allocation of resources as involving "very painful decisions."

The company is making multi-year commitments to lock in future compute supply. The constraint is not unique to OpenAI: other leading AI firms, such as Anthropic, have also imposed usage caps on their models during peak hours, pointing to an industry-wide bottleneck.

Long-Term Implications

The situation underscores a critical limitation for the entire AI sector: even the most advanced software and models are ultimately constrained by physical hardware. For the foreseeable future, a company's ability to scale and innovate in AI will be intrinsically tied to its access to vast computing resources, forcing a new era of strategic trade-offs and prioritisation.