Musk's X commits to UK regulator on hate speech with Grok probe still open


Britain's communications regulator, Ofcom, said on Friday that X has agreed to a set of commitments on illegal hate speech and terrorist content, after months of pressure that escalated through the autumn and winter.

Under the deal, Elon Musk's platform will review suspected illegal hate and terrorism posts within 24 hours on average, assess at least 85% of them within 48 hours, and submit quarterly performance data to the regulator over the next year.

The platform has also promised to restrict UK access to accounts operated by or on behalf of organisations proscribed under British terrorism law, and to engage external experts to overhaul a reporting flow that civil-society groups have repeatedly described as opaque.

The wording matters here, because the substance of most complaints filed against X with Ofcom over the past year has been that flagged content was neither clearly received nor visibly acted on.

Suzanne Cater, Ofcom's online safety enforcement director, said in a statement that ‘terrorist content and illegal hate speech is persisting on some of the largest social media sites', and that the gap had become ‘of particular importance in the UK following a number of recent hate-motivated crimes suffered by the country's Jewish community'.

Imran Ahmed of the Center for Countering Digital Hate said the commitments followed ‘sustained campaigning' after last year's attack on Heaton Park Synagogue in Manchester.

Britain has had a difficult run of incidents to absorb. The Heaton Park attack was followed by a fatal incident in north London last month that police are treating as terrorism, and CCDH's own monitoring after the Golders Green attack documented what it described as a flood of antisemitic posts on X (CCDH has published the underlying dataset).

The new commitments do not address those incidents directly. They set the procedural floor underneath them.

The reception was mixed. Danny Stone, chief executive of the Antisemitism Policy Trust, described the package as ‘a good start' but said X was still ‘failing in so many regards' to tackle racism.

Ofcom itself was careful to note that its formal investigation into X, including the company's systems for handling illegal content and questions raised by its Grok AI assistant, remains open. Friday's agreement is a negotiated commitment, not a settlement.

There is a separate Grok track running in parallel. Ofcom is examining how X handles AI-generated sexualised imagery created with the chatbot, and earlier this month X limited Grok's image-editing features to paid users after a deepfake controversy and UK ban threat. The Friday commitments do not resolve that thread. They sit alongside it.

The wider context is familiar to anyone following the platform's regulatory pipeline. The European Commission has an open proceeding into whether X is failing to curb hate speech, and the Commission's own monitoring has identified the company as the largest single source of disinformation. Australian and Singaporean regulators have pressed on adjacent issues. The UK pact lands inside a queue rather than at the end of one.

Substantively, the new commitments are the operational expression of the Online Safety Act framework that became law in 2023, with the largest platforms now required to take down illegal content quickly or face fines of up to 10% of global turnover.

The 24-hour review pledge is the kind of measurable metric the regulator has wanted on paper. The 85%-within-48-hours backstop reads like a number worked out so Ofcom can audit it.

The quarterly data, delivered over the next year, will be the first granular dataset the regulator has on whether platform-side commitments actually move illegal-content removal in the direction the law intended.