An Unbothered Jimmy Wales Calls Grokipedia a ‘Cartoon Imitation’ of Wikipedia

Jimmy Wales, Wikipedia – June 11, 2025

In our increasingly confusing online experience, the last bastions of the Internet’s early egalitarian promise shine like diamonds. These relics of the Golden Age have somehow remained useful and free of corporate greed, even while constantly under siege for their dogmatism. The crown jewel among them is Wikipedia. Powered by legions of volunteer editors and generous donations since 2001, the humble open-source encyclopedia is generally considered our best attempt yet to collect the sum of all human knowledge. Free, citation-packed, and always self-auditing, it’s no wonder that many people consider the online encyclopedia one of the few wonders of the digital world.

Besides countless benefits for humans, this font of free information has also made model-training much easier for AI companies. But once Wikipedia-trained models started spitting out facts that matched reality’s well-known liberal bias and pierced the industry’s echo chamber bubble, some people became unhappy. Leading this crusade is Elon Musk. Cognitive dissonance now at the wheel, he declared Wikipedia another victim of the “woke mind virus,” set out to build his own Library of Alexandria, and launched an AI-powered competitor, Grokipedia, last October.

Speaking at India’s AI Impact Summit in New Delhi this week, Wikipedia co-founder and spokesperson Jimmy Wales was asked about the threat posed to the site by Grokipedia and its ilk. Unperturbed, he dismissed the xAI project as “a cartoon imitation of an encyclopedia.”

Wales singled out the humans behind Wikipedia, and the expertise and due diligence they provide, as critical ingredients to the site’s success.

“Why do I go to Wikipedia? I go to Wikipedia because it’s human-verified knowledge,” Wales explained. “We wouldn’t even for a second consider letting AI write Wikipedia articles today because we know how bad they can be.”

Wales described the tendency of AI models to “hallucinate” inaccurate, misleading, or tangential information as their primary disqualifying factor. And he is not wrong. A 2025 OpenAI study showed that its advanced models were still hallucinating at rates of up to 79% in some trials.

As Wales explained, these types of errors become even more common and obvious when AI is asked to delve deeper into a subject – one that may already be specialized. Where AI models fail here, their human counterparts shine. Wales championed these subject-matter experts – “obsessors” – as the best protection against inaccuracies and the providers of the best knowledge-seeking experiences.

Wales said, “The full, rich human context of this kind of understanding is really quite important in terms of understanding both what the reader wants and what the reader needs.”

If anything, Wales did Grokipedia a favor by keeping the conversation hallucination-centric. Many journalists and critics have already delved into the many controversies generated by Musk’s white nationalist, navel-gazing replica.

Even though Wikipedia remains the universally agreed-upon repository of worldly information, a major issue still looms. We are no longer debating a shared reality. With Grokipedia, a clear rival has been created. And the more people use it, the further we will drift from reconnecting our two worlds.
