Chinese open-source artificial intelligence models now account for nearly 30% of global usage, according to data from OpenRouter and Andreessen Horowitz. The study also reveals that Chinese-language prompts rank second globally in token volume. Only English-language prompts generate more tokens for AI processing.
Global adoption of open-source large language models (LLMs) surged this year, driven in large part by systems developed in China. Leading models include Alibaba Group's Qwen family, DeepSeek-V3, and Moonshot AI's Kimi K2, all of which compete directly with established proprietary models.
Proprietary chat models such as OpenAI's GPT-4o and GPT-5 remain dominant, holding a combined share of roughly 70% of the global AI model market. However, the rapid rise of Chinese open-source alternatives marks a major industry shift.
Chinese open-source LLMs started from a very low base: their global share was just 1.2% in late 2024 before climbing to nearly 30% in the months that followed. The report analysed an empirical dataset of 100 trillion tokens. Tokens are the fundamental units of data that AI models process for tasks such as prediction and reasoning.
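To make the token figures concrete, the short sketch below counts tokens for a sample prompt. It uses OpenAI's open-source tiktoken library purely for illustration; this is an assumption, since Qwen, DeepSeek, and Kimi each ship their own tokenizers and the same text yields different counts per model.

# Minimal tokenization sketch (illustrative only; each model family
# uses its own tokenizer, so counts will differ in practice).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a common OpenAI tokenizer

prompt = "Chinese open-source models now handle a large share of global traffic."
token_ids = enc.encode(prompt)

print(f"{len(token_ids)} tokens")   # how many units the model would process
print(enc.decode(token_ids[:5]))    # the text covered by the first five tokens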
ollama run deepseek-v3.2:cloud
DeepSeek v3.2 is now on Ollama's cloud!
DeepSeek v3.2 on Ollama's cloud can have thinking enabled and disabled. Give it a try. It's free to get started!
— ollama (@ollama) December 9, 2025
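For readers who want to try such a model programmatically rather than from the command line, the sketch below sends a prompt to Ollama's local REST API, which listens on port 11434 by default. The model tag deepseek-v3.2:cloud is taken from the post above; the example assumes Ollama is installed and the tag is available on your machine or account.

# Minimal sketch: query a model via Ollama's local REST API.
# Assumes the Ollama daemon is running on its default port and the
# deepseek-v3.2:cloud tag (from the post above) is available; adjust
# the model name to whatever `ollama list` shows for you.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-v3.2:cloud",
        "prompt": "Summarize the trade-offs of open-weight LLMs in two sentences.",
        "stream": False,  # return a single JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # the generated text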
On average, Chinese AI models accounted for 13% of weekly token volume in 2025. Growth accelerated sharply in the latter half of the year, bringing their average usage to approximately 13.7%, roughly on par with the share held by models from the rest of the world.
“China has emerged as a major force,” the report stated. It highlighted China’s role not only in domestic consumption but also in producing globally competitive models. This advancement occurred despite U.S. restrictions on China’s access to advanced Nvidia and AMD GPUs.
The report credits "competitive quality, rapid iteration, and dense release cycles" for China's ascent. Fast release schedules from Alibaba Cloud and DeepSeek let users adopt new versions quickly, and developers could handle heavier workloads more efficiently.
This is quite massive
Mistral has just released two open-source models including 'Devstral Small 2' which:
– Has only 24B parameters (28x smaller than DeepSeek 🤯)
– Can run LOCALLY on a laptop
– Is competitive with much larger models for coding
So basically everything you…
— Paul Couvert (@itsPaulAi) December 9, 2025
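The "runs locally on a laptop" claim largely comes down to memory arithmetic. The back-of-the-envelope sketch below shows why a 24B-parameter model fits consumer hardware once quantized while a model roughly 28x larger does not; the bytes-per-parameter figures are standard approximations for common precisions, and the calculation ignores activation and KV-cache overhead.

# Rough memory footprint for model weights alone (illustrative assumption:
# typical bytes per parameter at each precision; real deployments also need
# memory for activations and the KV cache).
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_memory_gb(params_billions: float, precision: str) -> float:
    # parameters (in billions) * bytes per parameter = gigabytes of weights
    return params_billions * BYTES_PER_PARAM[precision]

for precision in ("fp16", "int8", "int4"):
    small = weight_memory_gb(24, precision)        # a 24B model like Devstral Small 2
    large = weight_memory_gb(24 * 28, precision)   # roughly 28x larger, per the post above
    print(f"{precision}: 24B -> {small:.0f} GB, ~672B -> {large:.0f} GB")

At 4-bit quantization the 24B model needs on the order of 12 GB for its weights, within reach of a high-memory laptop, while the larger model remains firmly in datacenter territory.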
As these models gained recognition for efficiency and cost-effectiveness, Chinese became the world's second-most-used prompt language, accounting for nearly 5% of all AI requests. That proportion is far higher than Chinese-language content's approximately 1.1% share of global internet content.
In terms of national LLM token share, China ranks fourth globally, behind the United States, Singapore, and Germany. The open-source AI market has also been transformed: from near-total DeepSeek dominance in late 2024, it had become fragmented and competitive by late 2025, with Qwen and Kimi among the leading players and no single model holding more than a 25% share.