
OpenAI has not achieved its goal of developing superintelligence or artificial general intelligence, nor has it completed its planned creation of an autonomous “AI researcher”. But it has figured out how to get ChatGPT to curb its overuse of the em dash. So, that’s something.
In a post on X, CEO Sam Altman declared, “If you tell ChatGPT not to use em-dashes in your custom directives, it finally does what it should!” He described this development as a “small-but-happy victory”. The company confirmed the chatbot’s reduced reliance on the punctuation mark in a post on Threads, where it asked ChatGPT to write a formal apology for “ruining the em dash”. Fittingly, the chatbot was unable to write that apology without using an em dash.
There is a notable distinction to be made here. OpenAI hasn’t figured out how to make ChatGPT use the em dash more appropriately, or how to deploy it with more restraint by default. Instead, it has given users the ability to tell ChatGPT not to use it, a change that can be made within the chatbot’s personalization settings.
This capability follows the release of OpenAI’s latest model, GPT-5.1. One of the primary improvements the company highlighted in the rollout was that GPT-5.1 is apparently better at following instructions and offers more personalization features. So the em dash clampdown appears to be an example of users taking advantage of the model’s more tailored sensibilities rather than an improvement in its overall default output.
The fact that the em dash fix has to be applied on a user-by-user basis probably reflects how much of a black box most LLMs are. Indeed, both Altman’s comments and OpenAI’s presentation of personalization as the solution suggest that fixing the behavior at scale remains genuinely hard.
It appears that the company has figured out how to give custom instructions more weight when the model responds to a prompt, which can produce results like suppressing em dashes in an individual user’s version of ChatGPT. But the company still seems unable to explain why the problem occurred in the first place, or why it persists. It’s no surprise that OpenAI has been leaning heavily into personalization and talking less about AGI lately.