
To work around those rules, the Humanizer skill instructs Claude to replace inflated language with plain facts, offering this example transformation:
Before: “The Institute of Statistics of Catalonia was officially established in 1989, which was an important moment in the development of regional statistics in Spain.”
After: “The Institute of Statistics of Catalonia was founded in 1989 to collect and publish regional statistics.”
Claude reads those instructions and, as a pattern-matching machine, does its best to produce output that fits the context of the conversation or task.
Why AI writing detection falls short
Even with such a detailed set of rules devised by Wikipedia editors, AI writing detectors do not work reliably, as we have previously reported: there is nothing inherently unique about human writing that differentiates it from LLM output.
One reason is that even though most AI language models gravitate toward certain types of language, they can also be instructed to avoid them, as the Humanizer skill does. (Although that can be very difficult, as OpenAI has found in its years-long struggle against the em dash.)
Also, humans can write in chatbot-like ways. This article, for example, likely contains some “AI-written traits” that would trip AI detectors even though a professional human writer produced it, especially if a stray em dash slips in, because most LLMs picked up their writing techniques from examples of professional writing scraped from the web.
Along those lines, a caveat in the Wikipedia guide is worth noting: while the list catches some obvious cases, say, unedited ChatGPT output pasted in verbatim, it is a collection of heuristics, not iron-clad rules. The 2025 preprint cited on the page found that heavy users of large language models correctly identified AI-generated articles about 90 percent of the time. That sounds great until you consider the judgments that go wrong in the other direction: genuinely human prose flagged as machine-made, which is potentially enough to ruin some quality writing in the hunt for AI sloppiness.
Taking a step back, perhaps this suggests that AI detection efforts need to go beyond flagging particular phrases and dig more deeply into the actual factual content of the work (see what I did there?).