OpenAI Rolls Back ChatGPT’s Model Router System for Most Users

OpenAI is quietly making a sweeping change to the way millions of people use ChatGPT.

In a low-profile blog post tracking product changes, the company said it has withdrawn ChatGPT’s model router — an automated system that directs complex user questions to a more advanced “reasoning” model — for users on its free and $5-per-month Go tiers. Those users will now default to GPT-5.2 Instant, the fastest and cheapest-to-serve version of OpenAI’s new model series. Free and Go users will still be able to access reasoning models, but they will have to select them manually.

The model router launched just four months ago as part of OpenAI’s effort to unify the user experience with the introduction of GPT-5. The feature analyzes each user question before deciding whether ChatGPT should answer it with a faster-responding, cheaper-to-serve AI model or a slower, more expensive reasoning model. Ideally, the router would direct users to OpenAI’s smartest models exactly when they need them. Previously, users accessed advanced systems through a confusing “model picker” menu — a feature CEO Sam Altman said the company “hates as much as you do.”

In practice, the router appears to have sent many more free users to OpenAI’s advanced reasoning models, which are more expensive for OpenAI to serve. Shortly after its launch, Altman said the router had increased use of reasoning models among free users from less than 1 percent to 7 percent. It was an expensive bet aimed at improving ChatGPT’s answers, but the model router was not as well received as OpenAI had hoped.

A source familiar with the matter told WIRED that the router negatively impacted the company’s daily active user metric. While reasoning models are widely seen as the frontier of AI performance, they can spend minutes working through complex queries at significantly higher computational cost. Most consumers don’t want to wait, even if it means getting a better answer.

According to Chris Clark, chief operating officer of AI inference provider OpenRouter, fast-responding AI models continue to dominate general consumer chatbots. He says the speed and tone of responses on these platforms are paramount.

“If someone types something, and then you have to show the thinking indicator for 20 seconds, that’s not very engaging,” says Clark. “For general AI chatbots, you’re competing with Google [Search]. Google has always focused on making search as fast as possible. They were never like, ‘Oh my God, we’ve got to get a better answer, but do it slowly.’”
