The AI Safety Pledge Has Collapsed as Companies Admit They Cannot Afford to Slow Down
For several years the artificial intelligence industry reassured governments and the public that safety would come first. The idea was simple: if AI systems became too powerful or dangerous, the companies building them would slow down. Safety would act as a brake on technological progress. In February 2026 that assumption collided with reality, when one of the most safety-focused AI companies in the world admitted it could no longer afford to pause development, because if it did its competitors would simply move ahead.
This matters because artificial intelligence is no longer a niche research project. Governments and technology companies now believe AI will shape military power, economic growth, and scientific discovery for decades. Hundreds of billions of dollars are being invested in the computing infrastructure needed to train these systems. Once that level of money and geopolitical competition enters a technological race, slowing down becomes extremely difficult.
The clearest example of this shift came in February 2026 from Anthropic, one of the world’s most influential AI laboratories.
Anthropic was founded in 2021 by former OpenAI researchers including Dario Amodei and Jared Kaplan. From the beginning the company tried to position itself as the cautious voice in an industry often criticised for moving too quickly. Its leaders warned that powerful AI systems could eventually pose serious risks if they were built without strong safeguards.
To demonstrate that commitment, Anthropic introduced a framework called the Responsible Scaling Policy in September 2023.
The principle behind the policy was simple and unusually strict. If Anthropic’s AI systems became powerful enough to create serious risks, the company said it would pause development until safety measures caught up.
In other words, if the technology moved faster than the safeguards, the company would stop and wait.
At the time the pledge was widely praised. It appeared to be the first attempt by a major AI developer to impose a voluntary brake on its own progress.
But that commitment did not survive contact with the real world.
On 24 February 2026 Anthropic announced a major revision of the Responsible Scaling Policy. The new version removed the central commitment that the company would halt development if safety systems were not ready.
The reason was explained directly by Anthropic’s chief science officer, Jared Kaplan.
“We felt that it wouldn’t actually help anyone for us to stop training AI models,” Kaplan said when discussing the change.
Anthropic had originally assumed that AI companies might collectively slow development until safety systems improved. That assumption proved unrealistic. Artificial intelligence had already become a global race involving multiple companies and governments, and if one developer paused training new models while its rivals continued, the cautious company would simply fall behind. In practice, a safety pause would not slow the overall development of AI. It would only change who leads the race.
To understand why this happened, it helps to look at how quickly the artificial intelligence industry has grown.
Only a few years ago large language models were mostly research tools used by scientists. Today they are the centre of a massive technology competition involving the largest companies in the world.
OpenAI’s ChatGPT reached 100 million users within two months of its launch in late 2022, one of the fastest technology adoptions ever recorded.
Since then companies including Google, Microsoft, Meta, and Amazon have poured enormous amounts of money into developing rival systems. Chinese firms such as Baidu and DeepSeek are also building their own large AI models, ensuring that the competition is global rather than confined to Silicon Valley.
In other words, artificial intelligence has become a race.
The companies developing advanced artificial intelligence are competing for technological leadership, global markets, and strategic influence. Training the most powerful AI models requires enormous computing infrastructure, specialised semiconductor chips, and vast amounts of electricity. As a result, technology companies are investing tens or even hundreds of billions of dollars in AI development. Once that scale of investment begins, slowing down becomes extremely difficult. Any company that pauses development risks losing its technological position to rivals.
That competitive pressure is reinforced by geopolitics. Governments increasingly see artificial intelligence as a strategic technology similar to nuclear research or space exploration during the Cold War.
The United States has imposed export controls on advanced semiconductor technology in an attempt to limit China’s access to the most powerful AI chips. At the same time Chinese technology companies are racing to build their own models and computing infrastructure.
Once governments begin restricting chip exports and companies begin investing hundreds of billions of dollars in AI infrastructure, the technology is no longer simply a research project.
It becomes a strategic contest.
This does not mean safety concerns have disappeared. Inside AI laboratories researchers still study how to ensure that powerful systems behave reliably and cannot easily be misused. Anthropic itself continues to publish risk reports warning that advanced AI models could potentially help users understand dangerous topics such as biological weapons.
But safety now plays a different role.
Instead of acting as a brake that might stop development entirely, it has become something closer to risk management within an ongoing race.
Anthropic’s revised policy reflects that shift. Rather than promising to halt development, the company now emphasises transparency, publishing risk assessments and building safeguards as the technology advances.
Those measures are valuable. But they represent a fundamentally different philosophy from the original pledge to pause.
The deeper lesson is that artificial intelligence has entered a phase that many transformative technologies eventually reach.
Once several major actors believe the technology will determine future economic and military power, competition becomes the dominant force shaping its development.
Anthropic’s policy change did not create that reality.
It simply acknowledged it.
The safety debate is still happening.
But it is now taking place inside a race that no one seems willing to leave.
And history suggests that once a technological race reaches that point, it rarely slows down.
Anthropic has consistently been one of the most safety-conscious companies in the artificial intelligence industry, publicly emphasising alignment research, responsible deployment, and transparency about potential risks. Its Responsible Scaling Policy, first published in 2023, was widely seen as one of the most serious attempts by an AI laboratory to place voluntary safety limits on its own development.
For that reason the company’s recent policy revision is significant. When a firm that has built its reputation on safety says that it cannot afford to slow development while competitors continue building more powerful systems, the statement carries unusual weight. It does not come from a company indifferent to risk. It comes from one of the few organisations that tried most visibly to design formal safety frameworks for advanced AI.
Anthropic continues to publish safety research, risk reports, and governance proposals, and remains widely regarded as one of the most ethically cautious actors in the field. Precisely because of that reputation, its acknowledgement of the competitive pressures shaping AI development has become an important signal about how intense the global race for artificial intelligence has become.