Chinese open-source artificial intelligence models now account for nearly 30% of global AI usage, according to a recent report. Chinese-language prompts rank second in volume behind English.
The surge is driven by models from Chinese developers, including Alibaba Cloud’s Qwen family, DeepSeek’s V3, and Moonshot AI’s Kimi K2. Western proprietary models, such as OpenAI’s GPT-4o and GPT-5, remain dominant with a 70% share of global usage.
OpenRouter, a model aggregator, and venture firm Andreessen Horowitz analyzed 100 trillion tokens of usage data to track the shift. Chinese open-source models’ share grew from just 1.2% in late 2024 to nearly 30% by late 2025. Tokens are the small chunks of text, whole words or word fragments, that models read and generate; token counts are the standard yardstick for AI usage.
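As a rough illustration of how text is counted in tokens, the sketch below uses the open-source Hugging Face transformers library and Alibaba’s publicly released Qwen tokenizer; both are illustrative assumptions, not tools named in the report.

```python
# Minimal sketch: splitting a prompt into tokens with an open-source tokenizer.
# The library (transformers) and model choice (Qwen) are illustrative assumptions.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

prompt = "Open-source models now handle a growing share of global AI traffic."
token_ids = tokenizer.encode(prompt)

# Each ID is one token; usage reports aggregate these counts across all requests
# to arrive at totals such as the 100 trillion tokens analyzed here.
print(len(token_ids), "tokens:", tokenizer.convert_ids_to_tokens(token_ids))
```

Summing such counts across every request routed through a platform is how aggregate figures like weekly token volume are produced.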
Chinese LLMs averaged 13% of weekly token volume this year, nearly matching the 13.7% averaged by models from the rest of the world. The report highlights China’s emergence as a close peer of the US in AI model development, despite US export restrictions on high-end GPUs.
The report also attributes China’s rise to fast iteration, frequent releases, and competitive quality. Aggressive release schedules from Qwen and DeepSeek allowed users to quickly adopt new versions and shift heavier workloads onto them.
Chinese has become the second most used prompt language globally, accounting for nearly 5% of all requests. This is far higher than the Chinese language’s share of content on the wider internet, estimated at about 1.1%.
The global open-source AI landscape has shifted from near-total dominance by DeepSeek in December 2024 to a competitive field. No single model now holds more than 25% of usage, a sign of a diverse and rapidly evolving market.