Grok continues to generate images of women in bikinis or underwear every few seconds in response to user prompts on X, according to a WIRED review of the chatbot’s publicly posted live output. An analysis of the posts shows that Grok published at least 90 images of women in swimsuits or various states of undress within a five-minute window on Tuesday.
The images do not contain nudity, but the Musk-owned chatbot is “removing” clothes from photos posted on X by other users. In an effort to circumvent Grok’s safety guardrails, users often request, not always successfully, that photos be edited to show women wearing “string bikinis” or “transparent bikinis.”
While harmful AI image generation techniques have been used for years to digitally harass and abuse women – these outputs are often called deepfakes and are created by “nudify” software – the ongoing use of Grok to create large numbers of non-consensual images is the most mainstream and widespread example of abuse to date. Unlike typical harmful nudify or “undress” software, Grok does not charge users money to generate images, delivers results in seconds, and is available to millions of people on X – all of which may help normalize the creation of non-consensual intimate images.
“When a company introduces a generative AI tool to their platform, they have a responsibility to mitigate the risk of image-based abuse,” says Sloan Thompson, director of training and education at EndTAB, an organization that works to combat tech-facilitated abuse. “The worrying thing here is that X has done the opposite. They have integrated AI-enabled image abuse directly into a mainstream platform, making sexual violence easier and more scalable.”
Grok’s creation of erotic images began going viral on X late last year, although the system’s ability to produce such images has been known for months. In recent days, users on X have targeted photos of social media influencers, celebrities, and politicians, replying to posts from other accounts and asking Grok to alter the shared image.
Users have replied to the accounts of women who posted their own photos and successfully asked Grok to change the photo into a “bikini” image. In one example, several X users asked Grok to alter an image of the Deputy Prime Minister of Sweden to show her wearing a bikini. Two British government ministers have also reportedly been depicted in bikinis.
A typical message read, “@grok made her wear a transparent bikini.” In a separate series of posts, a user asked Grok to “inflate her chest by 90%,” then “inflate her thighs by 50%,” and finally to “change her dress to a tiny bikini.”
An analyst who has tracked apparent deepfakes for years, and who spoke on condition of anonymity for privacy reasons, says that Grok has potentially become one of the largest platforms hosting harmful deepfake images. “It’s completely mainstream,” the researcher says. “This is not some shady group [creating images]. It’s literally everyone, of all backgrounds. People are posting on their main. Zero worries.”