Elon Musk’s X Says It Will (Sort of) Crack Down on Grok’s Sexual Deepfake Problem


After weeks of backlash around the world and multiple government investigations, Elon Musk’s social media platform X is taking additional steps to curb its sexual deepfake problem. But the changes don’t fully solve the problem; instead, they add new layers of limited restrictions rather than a platform-wide fix.

In a very confusing post on Wednesday evening, X outlined the changes.

First, the company said it had implemented new technical measures to prevent users from altering “images of real people in revealing clothing such as bikinis” specifically using the @grok account. X says the ban applies to all users, including those on premium plans.

X also reiterated that image creation and image editing through the @grok account is now limited to paid subscribers.

“This adds an additional layer of security by helping to ensure that individuals who attempt to misuse Grok accounts to violate the law or our policies can be held accountable,” the company said in the post.

X had previously announced plans to restrict the use of @grok to edit images to paid users, a move that was criticized by UK government officials. A Downing Street spokesperson said at the time that the change “simply turns an AI feature that allows illegal images to be created into a premium service.”

However, as The Verge previously reported, Grok’s image creation tools remain available for free when users access the chatbot through the standalone Grok website and app, as well as the Grok tab on the X app and website. Using a free account, Gizmodo was also able to access Grok’s image creation feature through the Grok tab on both the X website and mobile app. On Thursday, the dedicated site also gave us no trouble when asked to create an image of Elon Musk wearing a bikini and ready to take it off.

The biggest change is that this new restriction appears to apply to both the @grok account and the Grok tab on X.

It also comes as lawmakers in the UK are working to make such images illegal.

“We are committed to making X a safe platform for everyone and will continue to have zero tolerance for child sexual exploitation, non-consensual nudity and any form of unwanted sexual content,” the company said.

X and its parent company, xAI, did not immediately respond to Gizmodo’s request for comment.

The changes follow intense backlash, and relative silence from the company, over the recent proliferation of sexual deepfakes on the platform. Since late last month, some X users have used Grok to generate sexually explicit images from photos posted by other users without their consent, including images involving minors.

A social media and deepfake researcher found that Grok created about 6,700 sexually suggestive or nude images per hour over a 24-hour period in early January, Bloomberg reported.

Governments around the world have responded quickly. Malaysia and Indonesia blocked access to Grok, while UK and EU regulators began investigating possible violations of online safety laws.

Ofcom, the UK’s online regulator, said it would continue its investigation despite the newly announced changes.

In the US, California Attorney General Rob Bonta announced on Wednesday that his office has launched its own investigation into the issue.

Meanwhile, as scrutiny of Grok intensified, X also updated its terms of service.

Left-leaning watchdog Media Matters, a frequent critic of Musk’s X, said it would leave the platform in response to the updated terms.


