DeepSeek’s R2 Model Could Shake Up Global AI Market

January 3, 2026

Chinese AI startup DeepSeek has introduced a new method for training artificial intelligence, highlighting China’s efforts to compete with companies like OpenAI despite limited access to advanced chips from Nvidia.

The company published a paper co-authored by founder Liang Wenfeng outlining a framework called Manifold-Constrained Hyper-Connections. This approach aims to make AI systems more scalable while reducing the computing power and energy needed for training.
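The paper itself is the authoritative reference, and the article does not describe the mechanics. Purely as a hypothetical sketch of the general idea behind hyper-connection architectures, which the new framework builds on, the block below replaces a transformer's single residual stream with several parallel streams whose read, write, and mixing weights are learned. All names, shapes, and initializations here are illustrative assumptions, not DeepSeek's implementation.

```python
import torch
import torch.nn as nn

class HyperConnection(nn.Module):
    """Illustrative hyper-connection block (a sketch, not DeepSeek's method).

    Instead of the usual single residual stream x = x + f(x), the model
    keeps n parallel hidden streams and learns how each layer reads from
    and writes back to them.
    """

    def __init__(self, n_streams: int = 4):
        super().__init__()
        # Learnable weights: how the layer input is read from the streams,
        # how its output is written back, and how streams mix with each other.
        self.read = nn.Parameter(torch.full((n_streams,), 1.0 / n_streams))
        self.write = nn.Parameter(torch.ones(n_streams))
        self.mix = nn.Parameter(torch.eye(n_streams))

    def forward(self, streams: torch.Tensor, layer: nn.Module) -> torch.Tensor:
        # streams: (n_streams, batch, seq, dim)
        x = torch.einsum("s,sbtd->btd", self.read, streams)          # read one input
        out = layer(x)                                               # ordinary layer pass
        streams = torch.einsum("sr,sbtd->rbtd", self.mix, streams)   # re-mix streams
        return streams + self.write.view(-1, 1, 1, 1) * out          # write output back


if __name__ == "__main__":
    dim, n_streams = 64, 4
    hc = HyperConnection(n_streams)
    ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
    streams = torch.randn(n_streams, 2, 16, dim)  # (streams, batch, seq, dim)
    streams = hc(streams, ff)
    print(streams.shape)  # torch.Size([4, 2, 16, 64])
```

The design intuition, as described in prior hyper-connections research, is that widening the residual pathway and letting the network learn its own connection pattern can improve gradient flow at depth without adding much compute per layer.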

DeepSeek is known for surprising the AI industry. Last year, it released its R1 reasoning model at a fraction of the cost of comparable models from Silicon Valley. The company has since shipped several smaller models and is now preparing its next flagship system, widely expected to be called R2, which could debut around China's Spring Festival in February.

Chinese AI startups face major challenges due to US restrictions on high-end semiconductors. These limitations have forced firms like DeepSeek to explore unconventional methods and system architectures.

Analysts at Bloomberg Intelligence say DeepSeek's upcoming R2 model could shake up the global AI landscape. Despite Google's recent success with its Gemini 3 model, China's cost-efficient models already occupy two of the top 15 spots in global rankings.

DeepSeek released the research on the open platforms arXiv and Hugging Face, listing 19 authors. The study focuses on issues such as training instability and limited scalability; tests were conducted on models ranging from 3 billion to 27 billion parameters, building on prior research into hyper-connection architectures.
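The article does not explain what the "manifold constraint" consists of. Purely as a hypothetical illustration of why constraining connection weights can aid training stability, the sketch below projects the read weights from the earlier block back onto the probability simplex after each optimizer step, so the combined residual signal keeps a bounded scale at any depth. The function name and the choice of simplex are assumptions, not the paper's method.

```python
import torch

@torch.no_grad()
def renormalize_read_weights(read: torch.nn.Parameter) -> None:
    # Hypothetical stabilization step (not from the paper): clamp the
    # read weights to be non-negative and rescale them to sum to 1,
    # i.e. project back onto the probability simplex.
    read.clamp_(min=0.0)
    read.div_(read.sum().clamp(min=1e-8))
```

In a training loop this would run right after `optimizer.step()`, e.g. `renormalize_read_weights(hc.read)` for the `HyperConnection` sketch above.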

The company says the new method could influence the next generation of foundational AI models, potentially shaping the way large-scale AI systems are designed worldwide.
