
GitHub has announced that it will move to a usage-based billing model for its GitHub Copilot AI service starting June 1. The move is billed as a necessary step to “better align pricing with actual usage” and keep Copilot financially sustainable amid increasing demand for limited AI computing resources.
GitHub Copilot customers currently receive an allotment of monthly “Requests” and “Premium Requests,” which are spent when they ask Copilot for help with AI models. But GitHub says those broad categories cover many different AI tasks with a wide range of total backend computing costs.
“Today, a quick chat question and a multi-hour autonomous coding session can cost a user the same amount of money,” the Microsoft-owned company wrote in its announcement. And while GitHub says it has “absorbed the rising estimated costs behind that usage” up to this point, lumping all “premium requests” together is “no longer sustainable.”
Under the new pricing system, GitHub Copilot customers will receive a monthly allotment of “AI Credits” that matches their monthly subscription payment. Pricing for additional AI usage beyond those credits will be calculated “based on token consumption, including inputs, outputs, and cached tokens, using the API rates listed for each model.”
Those API rates can vary greatly depending on the sophistication of the model being used. For example, OpenAI’s high-end GPT models currently range in price from $4.50 per million output tokens (GPT-5.4 mini) to $30 per million output tokens (GPT-5.5). The total number of tokens consumed by an individual AI prompt can also vary significantly, depending on how much “thinking” time the model needs to produce its output.
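To make the math concrete, token-based billing of this kind can be sketched as below. The rates and token counts here are illustrative placeholders, not GitHub's actual figures, and the function is a simplified model of how per-million-token pricing generally works:

```python
def estimate_cost(input_tokens: int, cached_tokens: int, output_tokens: int,
                  input_rate: float, cached_rate: float, output_rate: float) -> float:
    """Estimate the dollar cost of one request.

    Token counts are raw counts; rates are dollars per million tokens,
    following the convention used in published API price lists.
    """
    return (input_tokens * input_rate
            + cached_tokens * cached_rate
            + output_tokens * output_rate) / 1_000_000

# Hypothetical example: 10,000 input tokens at $4.50/M, no cached tokens,
# and 50,000 output tokens at $30/M (a long "thinking" response).
cost = estimate_cost(10_000, 0, 50_000, 4.50, 0.45, 30.00)
print(f"${cost:.3f}")  # → $1.545
```

A single long autonomous session can run this calculation many times over, which is why GitHub says a multi-hour coding session and a quick chat question should no longer cost the same.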
GitHub Copilot customers will still be able to use simple AI features like code completion and next edit suggestions without consuming AI credits. But Copilot code reviews will come with an additional cost in the form of GitHub Actions minutes.