
OpenAI is trying to lure more developers and vibe coders (people who create software using AI models and natural-language prompts) away from rivals like Anthropic.
Today, the firm most synonymous with the generative AI boom announced a new mid-range subscription tier – a $100 monthly ChatGPT Pro plan – that joins its Free, Go ($8 monthly), Plus ($20 monthly), and Pro ($200 monthly) plans for individuals who use ChatGPT and related OpenAI products.
OpenAI also currently offers Edu, Business ($25 per user monthly, formerly known as Team), and Enterprise (at varying prices) plans for organizations.
Why offer a $100 monthly ChatGPT Pro plan?
OpenAI's big selling point is that the new plan offers five times the usage limits on Codex, the company's agentic vibe coding application/harness (the name is shared by both, as well as a lineup of coding-specific language models), which seems reasonable given the math ($20 × 5 = $100).
As Sam Altman, co-founder and CEO of OpenAI, wrote in a post on X: "It’s great to see Codex getting so much love. We’re launching a $100 ChatGPT Pro tier based on popular demand."
However, at the same time, OpenAI's official company account on X noted: "We are rebalancing Codex usage [in ChatGPT], also supporting more sessions throughout the week rather than longer sessions in a single day."
In other words, OpenAI also appears to be tightening how much ChatGPT Plus users can use its Codex harness and applications per day.
What are the usage limits for the new $100 ChatGPT Pro plan vs. the $20 Plus?
So, what are the current limits on the $20 Plus plan, and what exactly does the new Pro plan give you 5x more of?
Turns out, this is trickier to calculate than you might think, because it really depends on which underlying AI model you're using to power the Codex application or harness, and whether you're working on code stored in the cloud or locally on your machine or server.
OpenAI's developer website notes that for individual users, usage is classified as "local messages" (tasks run on the user's machine) and "cloud tasks" (tasks run on OpenAI infrastructure), both sharing a five-hour rolling window. Notably, it actually shows that the $100 Pro plan lets you send 10 times more messages than the $20 Plus plan (see below)!
ChatGPT Plus ($20/month)
- GPT-5.4: 33-168 local messages every 5 hours
- GPT-5.4-mini: 110-560 local messages every 5 hours
- GPT-5.3-Codex: 45-225 local messages and 10-60 cloud tasks every 5 hours
- Code reviews: 10-25 pull requests per week
ChatGPT Pro 5x ($100/month)
- GPT-5.4: 330-1,680 local messages every 5 hours
- GPT-5.4-mini: 1,100-5,600 local messages every 5 hours
- GPT-5.3-Codex: 450-2,250 local messages and 100-600 cloud tasks every 5 hours
- Code reviews: 100-250 pull requests per week
ChatGPT Pro 20x ($200/month)
- GPT-5.4: 660-3,360 local messages every 5 hours
- GPT-5.4-mini: 2,200-11,200 local messages every 5 hours
- GPT-5.3-Codex: 900-4,500 local messages and 200-1,200 cloud tasks every 5 hours
- Code reviews: 200-500 pull requests per week
- Exclusive access: GPT-5.3-Codex-Spark (research preview), which has its own dynamic usage limits
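Going by the upper bounds of the ranges above, the "5x" branding actually understates the jump from Plus: every per-window cap on the $100 tier is 10 times the Plus cap. A quick sanity check (figures are taken from this article, not pulled from any official OpenAI API):

```python
# Upper bounds of the usage ranges reported above (article figures,
# not values from any official OpenAI endpoint).
plus = {
    "GPT-5.4 local messages": 168,
    "GPT-5.4-mini local messages": 560,
    "GPT-5.3-Codex local messages": 225,
    "GPT-5.3-Codex cloud tasks": 60,
    "code reviews (PRs/week)": 25,
}
pro_5x = {
    "GPT-5.4 local messages": 1680,
    "GPT-5.4-mini local messages": 5600,
    "GPT-5.3-Codex local messages": 2250,
    "GPT-5.3-Codex cloud tasks": 600,
    "code reviews (PRs/week)": 250,
}

for metric, plus_cap in plus.items():
    # Compare each $100-tier cap to the corresponding $20-tier cap.
    print(f"{metric}: {pro_5x[metric] / plus_cap:.0f}x the Plus cap")
# Every metric comes out at 10x, not 5x.
```

The same arithmetic against the $200 tier yields 20x, matching the "Pro 20x" naming.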
And as stated in OpenAI’s help documentation:
"The number of Codex messages you can send within these limits varies depending on the size and complexity of your coding tasks and where you perform the tasks. Small scripts or simple functions may consume only a fraction of your allowance, while larger codebases, long-running tasks, or extended sessions that require Codex to keep more context will use significantly more per message."
Larger strategic implications and context
OpenAI’s sudden move toward the $100 price point and expansion of agentic capabilities comes amid the unprecedented financial rise of its main rival, Anthropic.
Just a few days ago, Anthropic revealed that its annual run-rate revenue (ARR) has topped $30 billion, surpassing OpenAI’s last reported ARR of around $24-$25 billion.
This growth has been fueled by the mass adoption of Claude Code and Claude Cowork, products that have set the benchmark for enterprise-grade autonomous coding.
Competitive friction intensified on April 4, 2026, when Anthropic officially stopped allowing Claude subscriptions to provide intelligence for third-party agentic AI harnesses such as OpenClaw.
To be clear, Anthropic's Claude models themselves can still be used with OpenClaw; users will now just have to pay for access to the Claude models through Anthropic's application programming interface (API) or additional usage credits, rather than as part of monthly Claude subscription tiers (which some have compared to an "all you can eat" buffet that makes the economics challenging for Anthropic when power users and third-party harnesses like OpenClaw consume more in tokens than the $20 or $200 those users spend monthly on their plans).
OpenClaw's creator, Peter Steinberger, was hired by OpenAI in February 2026 to lead its personal agent strategy, and since joining he has spoken out against Anthropic's restrictions, noting that OpenAI's Codex and models generally do not carry the same limitations.
By hiring Steinberger and subsequently launching a Pro tier that offers the high-volume capacity Anthropic recently restricted, OpenAI is effectively courting the displaced OpenClaw community in a bid to reclaim the professional developer market.