DeepSeek delays new AI model launch over Huawei chip issues

Tech in Asia·2025-08-14 17:01

Chinese AI startup DeepSeek has delayed the launch of its R2 model due to technical issues with Huawei’s Ascend chips.

After the company released its R1 model in January 2025, Chinese authorities encouraged DeepSeek to use Huawei processors instead of Nvidia’s.

Persistent problems during R2 training on Ascend chips forced the company to switch to Nvidia for training while keeping Huawei hardware for inference.

The setback reportedly pushed the R2 launch back from its planned May 2025 release, giving competitors time to advance.

Industry insiders say Chinese-made chips still lag behind Nvidia’s in stability, connectivity, and software.

Despite Huawei sending engineers to assist, DeepSeek could not complete a successful R2 training run on the Ascend chip.

The company is still working to make the model compatible with Huawei’s hardware for inference.

🔗 Source: Reuters

🧠 Food for thought

1️⃣ Hardware bottlenecks persist despite aggressive policy interventions

DeepSeek’s training difficulties illustrate a broader challenge facing China’s semiconductor ecosystem despite years of government investment and policy pressure.

The numbers reveal the scale of the problem: in 2024, Chinese firms purchased approximately 1 million Nvidia H20 chips compared to only 450,000 Huawei Ascend 910B chips, showing market preference despite official encouragement to buy domestic alternatives [1].

This preference stems from measurable performance gaps, as Huawei’s chips lag behind Nvidia’s in critical metrics, including memory bandwidth and processing capabilities [1].

Production capacity constraints compound these quality issues: Huawei is expected to produce only 200,000 AI chips in 2025 [2].

Chinese firms are estimated to be approximately two years behind in chip design and five generations behind in semiconductor manufacturing equipment [3].

These technical realities explain why DeepSeek, despite having Huawei engineers on-site for support, ultimately couldn’t complete successful training runs on Ascend processors and had to revert to Nvidia chips for training while using Huawei only for the less demanding inference tasks.

2️⃣ Model performance improvements decouple from hardware limitations

Despite persistent chip constraints, Chinese AI models have narrowed the performance gap with US counterparts through innovative approaches that maximize limited computing resources.

The performance gap between US and Chinese AI models shrank from 103 points in January 2024 to just 23 points by February 2025 [4].

This convergence occurred even as China maintained significantly lower total compute capacity, with the US holding a tenfold advantage in overall computing power [5].
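Taken together, the benchmark figures quoted above imply the gap closed by roughly three quarters in about a year. A quick back-of-the-envelope check, using only the point values cited in the article:

```python
# Benchmark-point gap between leading US and Chinese AI models,
# as quoted in the article (source [4]).
gap_jan_2024 = 103  # points, January 2024
gap_feb_2025 = 23   # points, February 2025

# Absolute and relative narrowing of the gap over roughly 13 months.
narrowed_by = gap_jan_2024 - gap_feb_2025
narrowed_pct = narrowed_by / gap_jan_2024 * 100
print(f"Gap narrowed by {narrowed_by} points ({narrowed_pct:.0f}%)")
# → Gap narrowed by 80 points (78%)
```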

DeepSeek’s R1 model exemplifies this efficiency-focused approach, achieving competitive performance at significantly lower development cost than US models [6].

This suggests that raw computing power, while important for training, may not be the primary determinant of final model quality when combined with algorithmic innovations and more efficient training methodologies.

This efficiency advantage could prove strategically important as it allows Chinese companies to remain competitive in AI capabilities even while operating under continued hardware constraints.

Read full article on Tech in Asia
