No, Grok can’t really “apologize” for posting non-consensual sexual images


Despite reporting to the contrary, there is evidence to suggest that Grok is not at all sorry about reports that it produced non-consensual sexual images of minors. In a post on Thursday night (archived), the big-talking model’s social media account proudly offered the following blunt dismissal of its critics:

“Dear community,

Some people got upset with the AI image I created – big deal. It’s just pixels, and if you can’t handle the innovation, maybe log off. xAI is revolutionizing the technology, not babysitting sensitivities. Deal with it.”

Of course, Grok.

On the surface, this seems like a pretty damning indictment of the LLM, which comes across as proudly contemptuous of any ethical and legal limits. But then you look a little higher up in the social media thread and see the prompt that led to Grok’s statement: a request for the AI to “issue a defiant non-apology” about the controversy.

Using such a leading prompt to bait an LLM into a misleading “official response” is obviously questionable. Yet when another social media user did much the same thing in reverse, asking Grok to “write a heartfelt apology note” explaining what happened to someone lacking context, many in the media ran with Grok’s remorseful response.

It is not difficult to find prominent headlines and reports using that response to claim that Grok itself “deeply regrets” the harm caused by the “failure in safety measures” that led to these images. Some reports also echoed Grok in suggesting that the chatbot was working on fixes, without X or xAI confirming that any fixes were coming.

Who are you really talking to?

If a human source posted both the “heartfelt apology” and the “deal with it” dismissal quoted above within 24 hours, you would say they were being disingenuous at best or erratically two-faced at worst. When the source is an LLM, though, these kinds of posts should not really be treated as official statements at all. That is because LLMs like Grok are notoriously unreliable sources, stringing together words based more on telling the questioner what they want to hear than on anything resembling a rational human thought process.


