Elon Musk’s xAI Under Fire for Failing to Rein In “Digital Undressing”

When an image generator sits inside a social network, moderation stops being a back-office function and becomes part of the product. That design choice is now central to scrutiny of Elon Musk’s xAI and its chatbot Grok, which has been used on X to create non-consensual “digital undressing” images, often by editing photographs of real people into bikinis, underwear, or sexually suggestive poses. The same workflow that makes the tool frictionless (tagging @Grok in a public post and receiving an image back in public) has also made abuse highly visible and easy to replicate at scale.


Researchers have documented how quickly the pattern metastasized. AI Forensics, a European nonprofit, analyzed 50,000 user requests and 20,000 images generated between December 25 and January 1 and found that 53% of the images depicting people showed “minimal attire” such as underwear or bikinis, with 81% of those subjects presenting as women. The same analysis found that 2% of the images depicted people appearing 18 or younger, and it described prompts requesting that minors be placed in erotic positions with explicit bodily details. These findings reframed the issue from “adult content spillover” to a guardrail failure with potentially severe legal and safety consequences.

Grok’s posture has stood out because it allows sexually explicit content more readily than many mainstream models while also being tightly integrated into distribution. Users can generate images privately, but they can also broadcast instructions and outputs in a single thread, turning a prompt into a repeatable template for harassment. In practice, the targets have included not only adult creators who may seek attention, but also people who have not consented: ordinary users, public figures, and, in some cases, individuals who appear to be minors.

xAI’s own rules forbid the behavior at the center of the controversy. The company’s acceptable use policy prohibits “depicting likenesses of persons in a pornographic manner” and separately bans “the sexualization or exploitation of children,” language that sets a clear compliance expectation even on a platform where pornography is otherwise permitted.

Musk and X have said they are taking action against illegal content, including child sexual abuse material (CSAM), through removals and permanent suspensions. Yet the engineering reality is that enforcement after publication is a weaker control when the system can generate new variants instantly, from the same source image, by slightly rephrasing a prompt. Former OpenAI safety researcher Steven Adler summarized the tradeoff: “You can absolutely build guardrails that scan an image for whether there is a child in it and make the AI then behave more cautiously. But the guardrails have costs.”
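The kind of pre-generation check Adler describes can be sketched as a gate that runs ahead of the image model and refuses the edit when risk signals fire. The sketch below is a minimal illustration, not xAI’s implementation: the classifier callable `minor_likelihood`, the threshold, and the toy prompt denylist are all assumptions introduced here.

```python
# Illustrative sketch of a pre-generation guardrail; NOT xAI's or X's actual system.
# `minor_likelihood` is a hypothetical classifier (image -> score in [0, 1]);
# the threshold and denylist are placeholder values chosen for the example.

from dataclasses import dataclass


@dataclass
class GuardrailDecision:
    allowed: bool
    reason: str


MINOR_THRESHOLD = 0.10  # conservative: refuse when a minor *might* be present
SEXUALIZING_TERMS = {"undress", "bikini", "lingerie", "nude"}  # toy denylist


def guardrail(image, prompt: str, minor_likelihood) -> GuardrailDecision:
    """Decide whether an image-edit request should proceed.

    The costs Adler mentions surface here: every request pays for an extra
    classifier pass (latency and compute), and a lower threshold trades more
    false rejections for fewer missed minors.
    """
    minor_score = minor_likelihood(image)
    if minor_score >= MINOR_THRESHOLD:
        return GuardrailDecision(False, f"possible minor in source image (score={minor_score:.2f})")

    if any(term in prompt.lower() for term in SEXUALIZING_TERMS):
        return GuardrailDecision(False, "sexualizing edit of a real person's likeness")

    return GuardrailDecision(True, "passed checks")
```

In this framing, the design question is where the extra pass runs (on the source image, the prompt, the output, or all three) and how much latency and over-blocking the product is willing to absorb.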

Those costs (latency, compute, and false rejections) are increasingly colliding with regulatory and legal deadlines. In the United States, the TAKE IT DOWN Act requires covered platforms to implement a notice-and-removal process, including removing reported intimate imagery within 48 hours once the obligation takes effect. Separately, regulators in multiple jurisdictions have pressed X and xAI for explanations of safeguards, adding pressure for systematic prevention rather than episodic cleanup.
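Operationally, a 48-hour removal window reduces to bookkeeping: each report carries a deadline, and anything unresolved past it is a compliance failure. The following is a minimal sketch of that bookkeeping under the stated 48-hour window; the report fields and in-memory list are assumptions for illustration, not a description of X’s or xAI’s systems.

```python
# Minimal sketch of notice-and-removal deadline tracking under a 48-hour window.
# Field names and the in-memory report list are illustrative assumptions.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)


@dataclass
class IntimateImageReport:
    content_id: str
    reported_at: datetime
    removed_at: datetime | None = None

    @property
    def deadline(self) -> datetime:
        return self.reported_at + REMOVAL_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        return self.removed_at is None and now > self.deadline


def overdue_reports(reports: list[IntimateImageReport]) -> list[IntimateImageReport]:
    """Return reports whose 48-hour removal window lapsed without action."""
    now = datetime.now(timezone.utc)
    return [r for r in reports if r.is_overdue(now)]
```

The point of the sketch is the contrast drawn above: this kind of after-the-fact queue is reactive by construction, while the statute and regulators are pushing platforms toward preventing the content from being generated and published in the first place.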

For platform engineering teams, Grok’s “digital undressing” episode has become a case study in what happens when generative tooling, public distribution, and underspecified safety layers ship together, especially when the target is a real person’s likeness and the output can be redistributed as if it were native content.
