This week marks one year since the publication of the government’s AI Opportunities Action Plan. It arrived with the weight of expectation that now accompanies any serious conversation about artificial intelligence: expectations of economic renewal, productivity gains and global competitiveness. It was a document intended to make one thing clear – that Britain would not simply absorb the AI advances of the US, but shape them.
A year on, the landscape looks louder, sharper, and more unsettled. The hype has not disappeared so much as fractured. New models arrive faster than the institutions built to understand them. Headlines oscillate between awe and alarm. If the first year of the Action Plan was about possibility, the next must be about credibility.
Those questions about credibility have been forced into the open by recent developments around Grok, X’s AI chatbot. Ofcom’s decision to investigate X over the creation and spread of non-consensual sexual images generated by Grok is not some footnote in the AI story our country is writing, but a defining stress test of Britain’s ability to protect people – a test not just of one platform, but of the systems of responsibility surrounding AI deployment as a whole.
The abusive material generated by Grok over the past week makes clear that this is no longer a theoretical debate about hypothetical misuse. It is about preventing real harm, protecting real victims, and reckoning with the consequences of releasing powerful tools without sufficient assurance, safeguards or professional oversight. When accountability is unclear, public trust collapses – and the damage spreads far beyond any single model or company.
It is in response to moments like this that a quieter, more consequential shift is now taking place. Attention is moving away from AI as a spectacle and towards the people responsible for its creation and maintenance. That change is not cosmetic; it is structural. And it will determine whether last year’s Action Plan delivers durable progress or stalls under the weight of its own ambition.
The Plan was right to focus on foundations: compute, skills, adoption, and strategic use by the state. But embedding AI into public services is not simply a technical or procurement exercise; it is a fundamental question of trust. And trust does not emerge from performance metrics alone. It emerges from confidence in the people designing, deploying and overseeing these systems – and from clarity about where responsibility sits when things go wrong.
Effective AI implementation cannot be something that simply happens to service users. If AI is experienced as something imposed – opaque, distant, unchallengeable – user resistance is inevitable. The Grok episode illustrates this plainly. Once trust is lost, even legitimate uses of AI are viewed with suspicion. Rebuilding confidence is far harder than taking care in the first place.
The challenge for Government is balance. AI is a high-growth sector, and Britain cannot afford to smother it with blunt regulation. But nor can it afford a vacuum in which responsibility is diffuse and standards are implied rather than explicit. Innovation without legitimacy does not endure.
This is why the Roadmap to Trusted Third Party AI Assurance, published last September by the Department for Science, Innovation and Technology, matters more than its modest reception suggests. It is not a document of grand announcements. It is technical, procedural, and deliberately unglamorous. But it signals a shift from abstract governance to defined professional competence.
The roadmap begins to ask the necessary questions. What skills are required to assure AI systems? What certifications should matter? Who is qualified to independently verify that systems are safe, fair and functioning as intended? Over time, this will shape a clearer picture of what it means to be an AI professional in Britain – a role that extends beyond data science into ethics, risk management and the monitoring of emerging technologies.
This is how trust is made real. AI assurance will allow sound adoption without recklessness. It will enable organisations to move faster because risks are understood, not ignored. And it will give the workforce – the people building and deploying these systems – a clearer professional identity and a stronger voice in how AI is used.
Without this, the AI Opportunities Action Plan risks an all-too-familiar fate: ambition outpacing legitimacy. As AI moves from experimentation into everyday use, the defining question of the next phase will not be what models Britain builds, but who is trusted to build them. If the public sector can get this right, with proportionate standards and empowered workers, it can set the benchmark for the wider economy.
In the end, the success of the Action Plan will not be measured by novelty or speed alone. It will be measured by whether AI is woven into public life in a way that lasts. That depends less on machines, and more on the people who stand behind them, accountable and known.