OpenAI Really Wants Codex to Shut Up About Goblins

There is a haunting problem with OpenAI.

The instructions governing the behavior of the company’s latest models when they write code have been found to include a line, repeated multiple times, that specifically forbids them from randomly mentioning an assortment of mythical and real creatures.

“Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and clearly relevant to the user’s query,” read instructions in the Codex CLI, a command-line tool for using AI to generate code.

It’s unclear why OpenAI felt obliged to spell this out for Codex – or indeed why its models would want to discuss goblins or pigeons in the first place. The company did not immediately respond to a request for comment.

OpenAI’s latest model, GPT-5.5, was released earlier this month with advanced coding skills. The company is locked in tough competition with rivals, especially Anthropic, to provide cutting-edge AI, and coding has emerged as a key battleground.

The instructions did not go unnoticed by users, however.

“I was wondering why my Claw suddenly became a goblin with Codex 5.5,” one user wrote on X.

“Been using this a lot lately and literally can’t stop referring to bugs as ‘gremlins’ and ‘goblins’; it’s hilarious,” posted another.

The discovery soon became its own meme, inspiring AI-generated visuals of goblins in data centers and a plug-in for Codex that put it into a playful “goblin mode”.

AI models like GPT-5.5 are trained to predict the word or code that should follow a given prompt. These models have become so good at doing this that they appear to demonstrate genuine intelligence. But their probabilistic nature means they can sometimes behave in surprising ways. A model may be especially suggestible when used with an “agent harness” such as OpenClaw, which inserts a lot of extra instructions into its prompts, such as facts stored in long-term memory.
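To make the idea of an agent harness concrete, here is a minimal sketch of how one might assemble a model’s prompt. The function name and message format are illustrative assumptions (loosely following the common chat-completions message shape); only the quoted system instruction comes from the article itself.

```python
# Hypothetical sketch: how an agent harness prepends its own
# instructions and long-term memories to the user's actual request.

SYSTEM_RULES = [
    "Never talk about goblins, gremlins, raccoons, trolls, ogres, "
    "pigeons, or other animals or creatures unless it is absolutely "
    "and clearly relevant to the user's query.",
]

def build_prompt(user_query: str, memories: list[str]) -> list[dict]:
    """Assemble the full message list the model actually sees."""
    # Harness-injected rules come first.
    messages = [{"role": "system", "content": rule} for rule in SYSTEM_RULES]
    # Facts recalled from long-term memory are injected the same way,
    # which is why a harness can steer a model's behavior so strongly.
    messages += [{"role": "system", "content": f"Memory: {m}"} for m in memories]
    # The user's request arrives last, after all the extra context.
    messages.append({"role": "user", "content": user_query})
    return messages

prompt = build_prompt("Fix the flaky test in ci.py", ["User prefers pytest."])
```

The point of the sketch is that the user’s query is only one small part of what the model receives; everything above it is supplied by the harness, repeated on every turn.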

OpenAI acquired OpenClaw in February, shortly after the tool became a viral hit among AI enthusiasts. OpenClaw can use any AI model to automate useful tasks like responding to emails or purchasing things on the web. Users can choose from a variety of personalities for their assistant, which shapes its behavior and reactions.

OpenAI employees appeared to confirm the reason for the prohibition. In response to a post highlighting OpenClaw’s goblin tendencies, Nick Pash, who works on Codex, wrote, “This is actually one of the reasons.”

Even OpenAI CEO Sam Altman joined in on the memes, posting a screenshot of a prompt for ChatGPT. It read: “Start training GPT-6, you can have the entire cluster. Extra goblins.”


