‘You can’t really prepare yourself to have your consent removed from you in this way,’ said Welsh TV presenter Jess Davies, recalling the moment she first learned that sexually explicit images of her, generated by an artificial intelligence tool, were circulating online. Her experience, echoed by other women in recent months, has become emblematic of an ongoing controversy over the responsibilities of technology companies and how quickly governments must move to address them.

At the centre of this scandal is Grok, the artificial intelligence chatbot available on X, the social media platform owned by Elon Musk. Although designed for conversational text generation and image editing, its features have been used to create sexualized deepfakes of real people without their knowledge. Until recently, its image-editing feature was accessible to hundreds of millions of users, which is believed to have enabled its widespread abuse.
Sharing intimate images without consent has long been illegal in the UK, but producing them using AI was not an offence until this change in the law. The legislation passed in June 2025, although it does not come into effect until this week. For people such as Davies, the delay has had dire consequences: ‘We knew that the legislation had already passed and was ready to go, and it would have protected victims.’
These new measures come under the Online Safety Act, a broad set of rules that impose a legal obligation on online platforms to conduct risk assessments, prevent illegal content from appearing, and remove it swiftly once it is flagged. Ofcom, the UK’s online safety regulator, has opened an inquiry into X’s compliance with these obligations in relation to Grok, with the possibility of fines of up to £18 million or 10% of global turnover.
The international community is increasingly alarmed. In France, ministers have referred the issue to prosecutors, while in India, the IT ministry has questioned X over its failure to prevent the spread of such obscene AI-generated content. The Internet Watch Foundation has also flagged disturbing reports of Grok being used to create sexually suggestive images of minors.
X has since restricted Grok’s ability to edit photos of real people into suggestive clothing, limiting the feature to regions where it is not illegal and to paying subscribers. The company argues that making the tools harder to access will curb their abuse, but critics counter that ‘charging for access to these tools is an insult to victims on top of an injury.’
For scholars like Cardiff University’s Dr Daisy Dixon, this is about more than platform policy; it is about the culture in which the platforms exist. After speaking out about the impact of a deepfake attack she had experienced herself, Dr Dixon was subjected to targeted harassment, threats and further non-consensual images. ‘It’s like someone’s hijacked your sense of self-understanding,’ she said.
International organizations have pointed out that tech-enabled abuse is only accelerating. According to UN Women, a ‘high estimate of 95% of online deepfakes’ are ‘non-consensual pornography,’ with women as the primary target. UN Women has called for ‘more robust legislation,’ a ‘more diverse tech development workforce,’ and ‘removal of harmful content’ because ‘online abuse can easily translate to offline harm.’ Technical approaches are also being considered. The Coalition for Content Provenance and Authenticity (C2PA) has standardized a way to embed tamper-evident information about a digital file’s provenance directly in the media itself.
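The tamper-evident idea behind such provenance standards can be illustrated with a minimal sketch: a manifest records who produced a file along with a hash of its contents, and a signature binds the two together so that any later edit to either the media or the manifest is detectable. (The real C2PA standard uses X.509 certificate signatures and an embedded container format rather than the shared-key HMAC used here for brevity; all names below are illustrative, not part of any actual C2PA tooling.)

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration only; real provenance systems
# use public-key certificates, not a shared secret.
SECRET_KEY = b"demo-signing-key"

def make_manifest(media_bytes: bytes, creator: str) -> dict:
    """Build a provenance record and sign it, binding it to the media bytes."""
    record = {
        "creator": creator,
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Return True only if neither the media nor the manifest was altered."""
    record = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and record["media_sha256"] == hashlib.sha256(media_bytes).hexdigest()
    )

original = b"original image bytes"
manifest = make_manifest(original, creator="example-camera")

print(verify_manifest(original, manifest))                    # True: untouched
print(verify_manifest(b"manipulated image bytes", manifest))  # False: edit detected
```

The point of the design is that the signature covers the hash of the content, so a manipulated image can no longer be passed off under the original manifest.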
Such standards cannot eradicate misuse, but they may help platforms and authorities identify and trace manipulated media. Whether the UK’s new rules will be enough to curb the abuse of AI tools such as Grok will become clear in the coming months. ‘When it comes down to it, [platform owners] are responsible for the safety of users,’ Davies said, expressing the hope that the regulations will not only remove harmful material faster but prevent it from being created in the first place.

