OpenAI is making significant strides towards one of its most ambitious internal milestones: developing artificial intelligence systems that can function at the level of a human research intern. The company's chief scientist, Jakub Pachocki, detailed the progress in a recent podcast appearance, highlighting advances in coding and mathematical reasoning as critical indicators.

Speaking on the "Unsupervised Learning" podcast, Pachocki stated that recent breakthroughs suggest AI is on track to handle increasingly complex, multi-step technical work with less human oversight. "I definitely see this as a signal that something here is on track," he said.

Defining the Goal: From Intern to Autonomous Researcher

The key metric for this progress, according to Pachocki, is the length of time a model can work mostly autonomously. "The way I would distinguish a research intern from a full automated researcher is the span of time that we would have it work mostly autonomously," he explained, pointing to longer task horizons as the primary measure of advancement.

This goal was previously outlined internally at OpenAI: an "AI research intern" by September 2026, followed by a fully autonomous AI researcher by March 2028.

Coding and Mathematics as Key Benchmarks

Pachocki pointed to specific areas where rapid progress is being made. He cited coding agents such as Codex, which now handle a significant portion of the company's programming work. "We've seen this explosive growth of coding tools," he said. "For most people, the act of programming has changed quite a bit."

He also identified mathematical benchmarks as a "north star" for improving model reasoning, because results in mathematics are easy to verify. The near-term challenge, he noted, is developing systems that can tackle specific technical tasks with greater autonomy, utilise more computing power, and operate for extended periods.

Current Limitations and Cautious Optimism

Despite the optimism, Pachocki was clear about current limitations. He does not expect AI to operate independently at the level of a full researcher in the immediate future. "I don't expect we'll have systems where you just tell them, 'go improve your model capability, go solve alignment,' and they will do it, not this year," he stated.

In a post on the social media platform X, OpenAI CEO Sam Altman acknowledged the ambitious nature of the goal, stating the company "may totally fail," but emphasised the importance of transparency given the technology's potential impact.

The Path Forward

Pachocki expressed confidence that the foundational components for an AI research intern are largely in place. "For more specific technical ideas, like I have this particular idea how to improve the models, how to run this evaluation differently, I think we have the pieces that we mostly just need to put together," he concluded. The focus now shifts to integrating these capabilities to achieve longer, more complex autonomous work cycles.