BoE weighs Anthropic risks for banks

Bank of England officials are set to discuss Anthropic’s powerful new AI model with banks, extending into the UK a regulatory debate that has already reached the top tier of US finance and technology policy. The move signals that supervisors are treating the issue not as a narrow technology story but as a live question for operational resilience, cyber defence and the wider stability of the financial system.

The urgency comes from Anthropic’s decision to hold back a broad release of Mythos, saying the model is capable of identifying and exploiting weaknesses across major operating systems and web browsers. Reuters reported that the company limited access to about 40 technology groups, while US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell met senior bank executives in Washington on April 8 to warn them about the cyber risks posed by the system and similar tools. That meeting, and the planned Bank of England discussions reported by Bloomberg on April 11, show how quickly frontier AI has shifted from an innovation topic to a matter of prudential concern.

For banks, the concern is two-sided. A model with advanced vulnerability discovery could help institutions strengthen defences, speed patching and improve cyber monitoring. The same capabilities could also lower the barrier for hostile actors to discover flaws or build exploits at scale. That dual-use character is what makes Mythos different from the more familiar debates around generative AI in customer service, coding assistance or productivity tools. Financial groups have spent the past two years racing to adopt large language models for internal efficiency, fraud controls and research functions. Regulators are now being forced to examine whether the next generation of models can alter the cyber threat landscape itself.

The UK regulatory backdrop helps explain why the Bank is unlikely to view this as an isolated episode. A joint Bank of England and Financial Conduct Authority survey published in November 2024 found that 75% of firms were already using AI, with another 10% expecting to do so within three years. It also found a notable rise in dependence on third parties, with the largest providers accounting for a substantial share of cloud, model and data services. That concentration has long worried supervisors because it can create common points of weakness across the financial sector. A highly capable cyber-focused model introduced through a small set of dominant providers would sharpen those concerns rather than replace them.

Officials have also been preparing the ground more openly. In February 2026, the Bank’s summary of AI roundtables said regulated firms broadly backed the Prudential Regulation Authority’s existing framework, particularly Supervisory Statement 1/23 on model risk management, as a workable basis for responsible AI adoption. On April 1, Bank deputy governor Sarah Breeden and PRA chief executive Sam Woods wrote to the Chancellor and other ministers setting out plans for “safe AI innovation” during 2026. At the FCA, Sheldon Mills said in January that the watchdog was undertaking a long-term review of AI in retail financial services, while keeping an outcomes-based and technology-neutral stance rather than rushing into prescriptive new rules. Taken together, those signals suggest UK regulators want room for innovation, but not at the cost of resilience, market integrity or consumer protection.

That balancing act may prove difficult if Anthropic’s claims about Mythos hold up under broader scrutiny. The company has presented the model as unusually strong in offensive and defensive cyber tasks and has said it briefed senior US officials and industry stakeholders before release. Such pre-briefing is unusual in commercial AI launches and underlines how companies themselves are beginning to frame some models as systemically sensitive tools rather than standard software products. For central banks and prudential supervisors, that raises an awkward question: when does a private-sector model become important enough to require a more formal supervisory response across critical sectors such as banking, payments and market infrastructure?

There is also a competitive dimension. UK authorities have been under pressure to support growth and ensure regulation does not choke off investment in AI-enabled finance. Breeden and Woods said responsible adoption could unlock innovation and economic growth, while the FCA has emphasised flexibility and adaptation rather than rigid rule-making. Banks, for their part, do not want to fall behind peers in using powerful models for security testing, software development and business operations. Yet the faster firms adopt tools supplied by a small circle of model developers and cloud partners, the more supervisors are likely to ask whether new forms of dependency, correlated failure and cyber contagion are being built into the system.