Technology

As competition intensifies, Anthropic updates its AI safety pledge. What is stated in the new policy?

By Kajal Sharma - 25 Feb 2026 05:53 PM

Anthropic has updated its safety guidelines to better reflect a global regulatory climate that increasingly prioritizes AI development and competitiveness. In a revised version of its Responsible Scaling Policy (RSP), the voluntary framework the Claude maker uses to address catastrophic risks from AI systems, Anthropic stated that it would not stop developing an AI model deemed dangerous if a competitor had already released a comparable or better model.

This marks a change from the RSP of two years ago, which said Anthropic would pause potentially hazardous AI development. In a blog post on Tuesday, February 24, Anthropic said the change in its safety policy was driven by the rapid advancement of AI and the absence of government consensus on AI rules.

The revised policy represents a significant shift, given that Anthropic has repeatedly been described as one of the most safety-conscious companies in the AI industry. But the startup has also faced fierce competition from rivals such as Google, OpenAI, and Elon Musk's xAI, which frequently release state-of-the-art tools.

"We anticipated that the announcement of our RSP would inspire other AI firms to implement comparable regulations. We anticipated that RSPs or comparable regulations would eventually become voluntary industry standards or influence AI legislation meant to promote safety and openness in AI model development," Anthropic said. Based on its evaluation of the earlier RSPs, the company added that "some parts of this theory of change have played out as we hoped, but others have not."
