Anthropic, the artificial intelligence company behind the Claude models, temporarily suspended the account of prominent developer Peter Steinberger on Friday. Steinberger, creator of the popular open-source agent framework OpenClaw, posted a screenshot on the social media platform X showing a notice that his account had been banned for "suspicious" activity.
The suspension occurred shortly after Anthropic announced a significant pricing change affecting OpenClaw users. Last week, the company stated that subscriptions to its Claude service would no longer cover usage through "third-party harnesses including OpenClaw," requiring separate API payments based on consumption.
Rapid Reversal After Public Outcry
Steinberger's account was reinstated within hours after his post detailing the ban went viral, accumulating hundreds of comments. An Anthropic engineer publicly responded in the thread, stating the company had "never banned anyone for using OpenClaw" and offering assistance. It remains unclear whether this intervention directly led to the account's restoration.
Steinberger, who is now employed by Anthropic's rival OpenAI, suggested a connection between the pricing policy change and the suspension. "Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source," he posted, possibly referencing features like Claude Dispatch added to Anthropic's own agent, Cowork.
Clash Over 'Claw Tax' and Compute Costs
Anthropic defended its new pricing structure, explaining that standard subscriptions were not designed to handle the "usage patterns" of tools like OpenClaw. The company stated that such agent frameworks can be far more computationally intensive than typical interactive use, often running continuous reasoning loops and integrating with multiple third-party tools.
Steinberger contested this rationale, asserting he was complying with the new API payment rules when his access was cut. The incident highlights growing tensions between AI platform providers and the developers of independent tools that extend their models' capabilities.
Developer's Dual Role and Testing Rationale
When questioned why he used Claude instead of his employer OpenAI's models, Steinberger clarified his dual responsibilities. He stated his work at the non-profit OpenClaw Foundation aims to ensure compatibility with "*any* model provider," while his role at OpenAI involves future product strategy.
He explained that continued testing with Claude was necessary because it remains a popular choice among OpenClaw users, and he needs to ensure updates do not break functionality for that user base. When commenters noted that many users prefer Claude over ChatGPT, his reply of "Working on that" hinted at his strategic work at OpenAI.
Broader Implications for AI Ecosystem
The brief ban and subsequent reinstatement underscore the complex power dynamics in the rapidly evolving AI industry. As companies like Anthropic develop their own agent ecosystems, such as Cowork, they face decisions about how to manage third-party tools that both extend and potentially compete with their native offerings.
Anthropic has not provided additional public comment on the specific reasons for the initial suspension. The company's new API-based pricing for OpenClaw usage, which some users have termed a "claw tax," is now in effect, marking a significant shift in how developers will pay for access through such frameworks.