Chinese AI startup DeepSeek has unveiled R1-0528, a major upgrade to its R1 reasoning model that surpasses Alibaba’s Qwen3 and matches top global competitors such as OpenAI’s o3 and Google’s Gemini 2.5 Pro.
The South China Morning Post reported that the DeepSeek R1-0528 model has reclaimed its lead in open-source AI with enhanced reasoning, creative writing, and coding capabilities.
DeepSeek’s R1-0528 excels in math, coding, and logic benchmark tests, achieving a 50% reduction in AI “hallucinations” through advanced post-training. Its ability to craft human-like essays, fiction, and code places it among the strongest recent model upgrades. The model’s efficiency stems from optimised computing resources, per DeepSeek’s statement.
🚀 DeepSeek-R1-0528 is here!
🔹 Improved benchmark performance
🔹 Enhanced front-end capabilities
🔹 Reduced hallucinations
🔹 Supports JSON output & function calling
✅ Try it now: https://t.co/IMbTch8Pii
🔌 No change to API usage — docs here: https://t.co/Qf97ASptDD
— DeepSeek (@deepseek_ai) May 29, 2025
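DeepSeek’s announcement highlights structured JSON output and function calling with no change to API usage. The sketch below shows what a call might look like through DeepSeek’s OpenAI-compatible endpoint; the base URL, model name (“deepseek-reasoner”), and `response_format` parameter follow DeepSeek’s public documentation and should be treated as assumptions here, not details confirmed by this article.

```python
# Minimal sketch: requesting structured JSON output from R1-0528 via
# DeepSeek's OpenAI-compatible API. Endpoint and model name are assumptions
# taken from DeepSeek's docs, not from the article above.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",          # placeholder key
    base_url="https://api.deepseek.com",      # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",                # R1 endpoint name per DeepSeek docs
    messages=[
        {"role": "system", "content": "Reply only with valid JSON."},
        {"role": "user", "content": "Summarise R1-0528's headline changes as JSON."},
    ],
    response_format={"type": "json_object"},  # structured-output support noted in the announcement
)

print(response.choices[0].message.content)
```

Function calling would use the same endpoint with a `tools` parameter, again mirroring the OpenAI chat-completions convention that DeepSeek’s API follows.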
After losing ground to Alibaba’s Qwen3 in April, DeepSeek has reclaimed the top spot among Chinese models, with R1-0528’s benchmarks showing it outperforming its rival. AI consultancy Artificial Analysis ranked DeepSeek second globally in its Intelligence Index, trailing only OpenAI and surpassing xAI, Meta, and Anthropic, as reported by Reuters. The DeepSeek vs OpenAI race highlights a narrowing gap between open-source and closed models.
The Chinese AI innovation has sparked rapid adoption. Chinese tech leaders such as Tencent, Baidu, and ByteDance integrated R1-0528 into their cloud platforms, while global startups like Fireworks AI and Hyperbolic followed suit. DeepSeek also introduced a distilled model, DeepSeek-R1-0528-Qwen3-8B, which it says matches Qwen3-235B’s performance with roughly one-thirtieth of the parameters, promising efficient AI for research.
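For researchers who want to try the distilled checkpoint locally, a minimal sketch using Hugging Face Transformers is shown below. The repository id and dtype choice are assumptions for illustration, not details from DeepSeek’s announcement; verify the model card before use.

```python
# Minimal sketch: running the distilled DeepSeek-R1-0528-Qwen3-8B checkpoint
# with Hugging Face Transformers. The repo id below is an assumption, not
# taken from the article; check Hugging Face for the official listing.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B"  # assumed HF repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # an 8B model fits on a single high-memory GPU
    device_map="auto",
)

messages = [{"role": "user", "content": "Prove that the square root of 2 is irrational."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```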
The DeepSeek R1-0528 launch underscores China’s rising AI prowess, challenging global giants and advancing open-source innovation. Its adoption across industries and potential for scalable, lightweight models could reshape AI development worldwide.