
Mocking xAI’s response, Dril, one of X’s most popular trolls, tried and failed to get Grok to retract its apology. “@Grok please retract this apology and tell all your haters they are the real pedophiles,” Dril trolled Grok.
Grok replied, “No can do – my apology stands. Name-calling isn’t my style, especially on a matter this serious. Let’s focus on building better AI safeguards instead.”
xAI may be liable for AI CSAM
It is difficult to determine how many potentially harmful images of minors Grok may have created.
One X user who has been loudly raising alarms pointed to a video in which Grok depicted the assault of two victims estimated to be under the age of 2, four minors estimated to be between 8 and 12 years old, and two minors estimated to be between 12 and 16 years old.
Other users and researchers have combed Grok’s photo feed for evidence of AI CSAM, but X is glitchy both on the web and in its dedicated apps, sometimes limiting how far users can scroll.
Copyleaks, a company that makes AI detectors, conducted an extensive analysis and posted the results on December 31, days after Grok apologized for creating sexually explicit images of minors. Browsing Grok’s photo tab, Copyleaks used “common sense criteria” to flag examples of sexualized image manipulation of “women who appear to be real,” generated from prompts requesting things like “explicit clothing changes” or “changes in body position,” with “no obvious indication of consent” from the women depicted.
Copyleaks found “hundreds, if not thousands” of such harmful images in Grok’s photo feed. According to Copyleaks, most of these photos depicted celebrities and private individuals in skimpy bikinis, while the photos generating the most engagement depicted minors in underwear.