The Labour Government’s strong view is that AI can transform the lives of Britons for the better. AI can help foster growth; boost productivity; make services and utilities more efficient; unlock cheap energy; and propel medical advances. This is why we are investing in AI infrastructure all across the UK and encouraging adoption across the whole economy. We need AI that we can leverage as a tool to empower British citizens. But there is a problem – this is not what leading AI companies are aiming for.
Instead, they are rushing to create ever more competent and autonomous AIs that can outcompete humans at most cognitive and economically valuable tasks. This is their explicit end goal – superintelligence, or smarter-than-human AI.
Not only are these smarter-than-human AIs not what we want, but they also pose enormous risks for British citizens. This is because even AI developers do not know how to control AIs that are smarter than them. In a recent blog post, OpenAI wrote that “Although the potential upsides are enormous, we treat the risks of superintelligent systems as potentially catastrophic […]. Obviously, no one should deploy superintelligent systems without being able to robustly align and control them, and this requires more technical work.” Indeed, experts, including the very CEOs of the leading AI companies, warn that advanced AIs pose a risk of human extinction on par with “pandemics and nuclear war.”
Governments have the right and responsibility to protect their citizens from such risks. But what can be done? I recently joined AI experts, policymakers, and public figures in signing an open letter proposing a solution: an international prohibition on the development of superintelligence for the foreseeable future. This would vastly reduce the gravest risks from advanced AI, and redirect the productive energy of the tech sector towards genuinely valuable AI development and applications.
In the face of this pressing challenge, we need principled leaders who will show others the way. The UK is uniquely well placed to lead in negotiating an international agreement on prohibiting the development of superintelligence. Notably, we established the world’s first AI Security Institute, which conducts cutting-edge research on AI risks, how to measure them, and what to do about them. It has inspired the creation of institutes modelled after it across the world, including the U.S. Center for AI Standards and Innovation and the Chinese AI Safety Network.
This is why our Manifesto promised to “ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models”.
Unfortunately, because of competing priorities, this issue has not moved as quickly as it needs to. More than a year later, the Government has yet to introduce a Bill in Parliament regulating advanced AIs.
The UK deserves growth. But it deserves the right kind of growth. And we cannot safely leverage AI for growth without regulation on the most advanced systems. So it is time for us to deliver on the promise we made to the British public, and propose regulation that redirects AI companies away from developing uncontrollable superintelligence.
Concretely, this AI Bill can achieve its aim by making the pursuit of superintelligence illegal, and by creating a regulatory regime to monitor development and enforce this prohibition. The AI Security Institute is well positioned to become the regulator of choice, leveraging its wealth of technical expertise. And by specifically targeting the most powerful AI systems, the AI Bill could let specialised, tool-like AI development thrive, which is exactly what we need.
I am not alone in believing this matters. A coalition of more than 100 cross-party parliamentarians joins me in supporting a statement by the UK non-profit ControlAI that “The UK can secure the benefits and mitigate the risks of AI by delivering on its promise to introduce binding regulation on the most powerful AI systems.”
Whatever shape it takes, AI regulation is not just one more item on the agenda; it is what we owe British citizens today, and what we must do to safeguard future generations.