OpenAI has not achieved its goal of developing a superintelligence or artificial general intelligence, nor has it cracked its planned construction of an autonomous “AI researcher.” But it has figured out how to get ChatGPT to stop misusing the em dash. So, that’s something.
In a post on X, CEO Sam Altman announced, “If you tell ChatGPT not to use em-dashes in your custom instructions, it finally does what it’s supposed to do!” He called the development a “small-but-happy win.” The company confirmed the capability to cut off the chatbot’s reliance on the punctuation mark in a post on Threads, where it made ChatGPT write a formal apology for “ruining the em dash.” Notably, the chatbot was not able to write the apology without using an em dash.
To that point, it seems like there’s a noteworthy distinction to be made here. OpenAI has not figured out how to get ChatGPT to use the em dash in a more appropriate manner or to deploy it more sparingly by default. Instead, it has simply given users the ability to tell ChatGPT not to use it, a change that can be made within the chatbot’s personalization settings.
That ability follows the release of GPT-5.1, the latest model from OpenAI. One of the improvements the company hammered home in its rollout of the new model was that GPT-5.1 is apparently better at following instructions and offers more personalization features. So the em dash clampdown appears to be just one example of how users can take advantage of the model’s more compliant sensibilities rather than a fix to the underlying model’s overall output.
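For readers who want to poke at the same instruction-following behavior outside the ChatGPT app, here is a minimal sketch using the OpenAI Python SDK. It approximates a custom instruction with a system message and then checks whether the reply actually avoids the character; the "gpt-5.1" model identifier and the instruction wording are assumptions for illustration, not confirmed API details.

```python
# Minimal sketch: approximate a ChatGPT custom instruction with a system message
# and check whether the reply actually avoids em dashes.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the
# environment; the "gpt-5.1" model name is an assumption, not a confirmed identifier.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5.1",  # hypothetical API name for the GPT-5.1 model
    messages=[
        {"role": "system", "content": "Never use em dashes (—) in your responses."},
        {"role": "user", "content": "Write a short apology for overusing the em dash."},
    ],
)

reply = response.choices[0].message.content
print(reply)

# Simple compliance check: does the reply still contain the em dash character?
print("Em dash found." if "—" in reply else "No em dashes; instruction followed.")
```

Whether the instruction sticks will vary from run to run, which mirrors the mixed results users reported in Altman’s replies.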
The fact that the em dash fix is something that has to happen on a user-by-user basis probably speaks to just how much of a black box most LLMs are. In fact, there are users in Altman’s replies on X showing that, despite the instruction, their instance of ChatGPT continues to spit out em dashes. OpenAI’s presentation of personalization as a solve would seem to suggest that finding a solution at scale is still really, really hard.
The company has seemingly figured out a way to weight custom instructions more heavily in its calculations when producing a response to a prompt, which can produce results like a given user’s ChatGPT no longer using em dashes. But it still seems like the company can’t figure out why the problem arose in the first place or why it persists. No wonder the company is leaning heavily into personalization and talking less about AGI lately.
Original Source: https://gizmodo.com/chatgpt-achieves-a-new-level-of-intelligence-not-using-the-em-dash-2000686253
Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.
