Michael Gerstenhaber, a Product Vice President at Google Cloud, has identified three distinct frontiers along which artificial intelligence models must advance to achieve widespread, practical enterprise use. In an interview with TechCrunch, Gerstenhaber, who leads the company's Vertex AI platform, argued that the race is no longer just about raw capability, but also about response time and cost at scale.

Gerstenhaber, who has spent the past two years working in AI, first at Anthropic and now at Google, stated that these three boundaries define how and where different AI models will be deployed. His insights come from overseeing a platform used by developers and major corporations like Shopify and Thomson Reuters to build agentic AI applications.

The Three Frontiers of AI Capability

The first frontier is raw intelligence, exemplified by models like Gemini Pro that are tuned for complex tasks such as code generation. "You just want the best code you can get, doesn’t matter if it takes 45 minutes," Gerstenhaber explained, highlighting use cases where quality matters more than speed.

The second is latency, which is crucial for real-time applications like customer service. Here, the goal is to deliver the "most intelligent product within that latency budget," because once a response exceeds a user's patience threshold, superior intelligence becomes irrelevant.

The third, and often overlooked, frontier is cost at scale. For companies like Reddit or Meta that need to moderate vast, unpredictable volumes of content, a model must be both intelligent and cheap enough to deploy "to an infinite number of subjects." This makes cost a primary constraint for massive-scale operations.

Vertical Integration as a Key Advantage

Gerstenhaber cited Google's unique vertical integration as a decisive strength, from building its own data centres and chips (TPUs) to developing foundational models, inference layers, and end-user applications like Gemini Enterprise. This control over the entire stack, he argued, provides a significant advantage in navigating these three frontiers simultaneously.

The Slow March to Agentic AI

Despite the potential, Gerstenhaber acknowledged that agentic AI systems—AI that can autonomously perform multi-step tasks—are taking time to catch on commercially. He attributed this delay to a lack of essential infrastructure, such as robust patterns for auditing agent actions and authorising data access.

"This technology is basically two years old, and there’s still a lot of missing infrastructure... Production is always a trailing indicator of what the technology is capable of," he stated. He noted that software engineering has seen faster adoption because it already has safe development environments and human-in-the-loop review processes — patterns that need to be replicated in other fields.

Gerstenhaber's analysis suggests that the future of AI will not be dominated by a single "smartest" model, but by a suite of specialised solutions, each optimised for a specific balance of intelligence, speed, and economics. The race is on to build the infrastructure that will allow these agentic systems to move from impressive demos into safe, governed, and scalable production.