Anthropic has announced higher usage limits for Claude paid users as it expands its compute infrastructure in partnership with
SpaceX. The company said the increased infrastructure capacity will directly improve availability for Claude Pro and Claude Max subscribers. The only tier excluded from the higher usage limits is the free plan.
According to Anthropic, its partnership with SpaceX will provide access to the entire compute capacity of SpaceX’s Colossus 1 data centre. The company said this will add more than 300 megawatts of compute capacity, including access to over 220,000 NVIDIA GPUs within a month.
Higher usage limits for Claude
Anthropic said it is introducing three major changes to Claude usage limits with immediate effect for paid users.
First, the company said it is doubling Claude Code’s five-hour rate limits for Pro, Max, Team, and seat-based Enterprise plans.
Second, Anthropic said it is removing peak-hour limit reductions for Pro and Max users of Claude Code.
The company also announced significantly higher API rate limits for Claude Opus models.
According to the updated limits shared by Anthropic, Tier-I users will now get up to 500,000 input tokens per minute, up from 30,000 earlier. Maximum output tokens per minute for the same tier have increased from 8,000 to 80,000.
Tier-II input token limits have increased from 450,000 to 2 million per minute, while output token limits have gone up from 90,000 to 200,000.
Tier-III users will now get up to 5 million input tokens and 400,000 output tokens per minute.
Tier-IV users will receive up to 10 million input tokens and 800,000 output tokens per minute.
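The per-tier figures above can be summarised in a small lookup table. The sketch below is purely illustrative: the tier keys and dictionary layout are our own, not an official Anthropic API, and the "previous" limits only cover the tiers for which the article gives earlier figures.

```python
# Illustrative summary of the Claude Opus API rate limits described above.
# Tier names and structure are assumptions for this sketch, not an official API.

OPUS_RATE_LIMITS = {
    # tier: (input tokens/minute, output tokens/minute)
    "tier_1": (500_000, 80_000),
    "tier_2": (2_000_000, 200_000),
    "tier_3": (5_000_000, 400_000),
    "tier_4": (10_000_000, 800_000),
}

# Earlier limits, as reported for Tiers I and II only.
PREVIOUS_LIMITS = {
    "tier_1": (30_000, 8_000),
    "tier_2": (450_000, 90_000),
}

def increase_factor(tier):
    """Return (input, output) multipliers of the new limits over the old ones."""
    new_in, new_out = OPUS_RATE_LIMITS[tier]
    old_in, old_out = PREVIOUS_LIMITS[tier]
    return new_in / old_in, new_out / old_out
```

By this arithmetic, Tier-I input capacity grows roughly 16-fold and output capacity 10-fold, while Tier-II input grows a little over 4-fold.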
What are rate limits and tokens?
For the uninitiated, rate limits define how much users can interact with Claude within a specific time period.
In Claude Code, these limits affect how many coding requests, prompts, or AI-assisted tasks users can run over a five-hour window, helping manage server load and overall platform availability.
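A rolling usage window of this kind is commonly implemented as a sliding-window limiter. The sketch below is a generic illustration of the idea, not Anthropic's actual mechanism: the class name, the request-count model, and the parameters are all assumptions for the example.

```python
from collections import deque

class SlidingWindowLimiter:
    """Illustrative sketch: allow at most `max_requests` within a rolling
    time window (e.g. a five-hour window, expressed in seconds)."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._timestamps = deque()  # times of requests still inside the window

    def allow(self, now):
        """Record a request at time `now` if it fits the window; return True/False."""
        # Drop requests that have aged out of the rolling window.
        while self._timestamps and now - self._timestamps[0] >= self.window_seconds:
            self._timestamps.popleft()
        if len(self._timestamps) < self.max_requests:
            self._timestamps.append(now)
            return True
        return False
```

Because old requests age out continuously, capacity is restored gradually rather than resetting all at once, which is what keeps server load smooth across a five-hour window.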
Input tokens per minute refer to how much text or data users can send to Claude within one minute through the API.
Tokens are small units of text used by AI models to process prompts, documents, code, or conversations.
Higher token limits allow users to handle larger workloads and more complex AI tasks faster.
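To make the per-minute budgets above concrete, here is a rough back-of-the-envelope sketch. The four-characters-per-token figure is a common rule of thumb for English text, not an exact property of Claude's tokenizer, and the helper names are our own.

```python
# Rough illustration of what a per-minute input-token budget means in practice.
CHARS_PER_TOKEN = 4  # assumption: a common rule of thumb, not Claude's exact tokenizer

def estimate_tokens(text):
    """Crude token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def prompts_per_minute(tokens_per_min_limit, avg_prompt_tokens):
    """How many prompts of a given size fit into a per-minute input budget."""
    return tokens_per_min_limit // avg_prompt_tokens
```

Under this estimate, a 5,000-token prompt (roughly 20,000 characters) would fit about 100 times per minute into the new Tier-I input limit of 500,000 tokens, versus about 6 times under the old 30,000-token limit.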
Anthropic’s AI infrastructure expansion
Anthropic said the SpaceX agreement is part of a broader push to expand its AI infrastructure capacity globally. The company highlighted previously announced compute partnerships, including an agreement with Amazon for up to 5 gigawatts of compute capacity, with nearly 1 gigawatt expected by the end of 2026.
Anthropic also referenced a separate 5-gigawatt agreement involving Google and Broadcom that is expected to begin coming online in 2027.
In addition, the company said it has a strategic partnership with Microsoft and NVIDIA involving $30 billion worth of Azure capacity, alongside a $50 billion investment in American AI infrastructure with Fluidstack.
Anthropic said it currently trains and operates Claude using a mix of AWS Trainium chips, Google TPUs, and NVIDIA GPUs.
Expansion plans
The company said some of its future capacity expansion will focus on international markets, particularly to support enterprise customers in sectors such as healthcare, financial services, and government that require localised infrastructure for compliance and data residency requirements.
According to Anthropic, its collaboration with Amazon will include additional inference infrastructure in Asia and Europe.
The company added that it plans to expand infrastructure primarily in democratic countries with legal and regulatory frameworks that support large-scale AI investments and secure supply chains.
Anthropic also said it is exploring ways to extend to other regions its commitment to offset consumer electricity price increases linked to its US data centres, as part of its broader international expansion plans.