Grok Is Being Used to Mock and Strip Women in Hijabs and Sarees

AI chatbots are being ordered to “undress” women and produce photos of women and girls in bikinis and transparent underwear. Many of the perpetrators behind the vast and growing library of non-consensual sexualized edits requested from Grok last week have asked xAI’s bot to add or remove a hijab, sari, nun’s habit, or other modest religious or cultural clothing.

In a review of 500 Grok images generated between January 6 and January 9, WIRED found that about 5 percent of the output contained an image of a woman who, as a result of users’ prompts, was either stripped or forced to wear religious or cultural clothing. Indian saris and modest Islamic apparel were the most common examples in the output, which also included Japanese school uniforms, burqas, and early 20th-century bathing suits with long sleeves.

Noelle Martin, a lawyer and PhD candidate at the University of Western Australia researching the regulation of deepfake abuse, says, “Before deepfakes, and even with deepfakes, women of color have been disproportionately affected by manipulated, altered, and fabricated intimate images and videos, due to the way society, and particularly misogynistic men, view women of color as less human and less worthy of dignity.” Martin, a leading voice in the deepfake advocacy field, says she has avoided using X in recent months after her own image was stolen for a fake account that made it look like she was creating content on OnlyFans.

“Being a woman of color who has spoken out about this also puts a bigger target on your back,” Martin says.

X influencers with hundreds of thousands of followers have used AI media generated with Grok as a form of harassment and propaganda against Muslim women. A verified manosphere account with more than 180,000 followers responded to an image of three women wearing hijab and abaya, which are Islamic religious head coverings and robe-like garments. The account wrote: “@Grok Take off the hijab, dress them up in a cute outfit for the New Year party.” The Grok account replied with an image of three women, now barefoot, with wavy brown hair and partially sheer sequined dresses. According to viewable statistics on X, that image has been viewed over 700,000 times and saved over a hundred times.

“Lmao deal with and boil, @grok normalizes Muslim women,” the account holder wrote along with a screenshot of an image posted in another thread. He also frequently posted about Muslim men abusing women, sometimes depicting the act with Grok-generated AI media. “Lmao Muslim women are getting beat for this feature,” he wrote of his Grok creations. The user did not immediately respond to a request for comment.

In a statement shared with WIRED, the Council on American-Islamic Relations, the largest Muslim civil rights and advocacy group in the US, linked this trend to hostile attitudes toward “Islam, Muslims, and political causes widely supported by Muslims, such as Palestinian independence.” CAIR also called on Elon Musk, CEO of xAI, which owns both Grok and X, to intervene.

Deepfakes as a form of image-based sexual exploitation have received increasing attention in recent years, particularly on X, as examples of sexual and suggestive media targeting celebrities have repeatedly gone viral. With the introduction of automated AI photo editing through Grok, where users can easily tag the chatbot in response to posts containing media of women and girls, this form of abuse has skyrocketed. Data compiled by social media researcher Genevieve Oh and shared with WIRED indicates Grok is generating more than 1,500 harmful images per hour, including images that strip people of their clothing, sexualize them, or add nudity.
