China’s AI Models Are Now Matching America’s Best, and Washington Is Running Out of Options
When DeepSeek released its R1 model in the winter of 2025 and the benchmark results began circulating in AI research circles, Silicon Valley’s response was a mix of amazement and unease. By the evidence available, the model matched the best American systems on standard reasoning tasks, was built at a small fraction of their cost, and ran on hardware that Washington’s export controls were supposed to have kept out of Chinese labs.
The story of how that happened has been picked over ever since, but it has yielded no comforting explanation for anyone hoping the restrictions were working as intended. In 2026, the picture is at once clearer and more complicated.
| Category | Details |
|---|---|
| Key Chinese Models | DeepSeek (DeepSeek AI) and Qwen (Alibaba) — matching U.S. top models in reasoning benchmarks at significantly lower cost |
| Cost Differential | Chinese models operating at approximately 40x lower cost than leading U.S. AI systems on comparable tasks |
| Qwen Downloads | 942 million downloads — Alibaba’s Qwen model family surpassing many Western competitors on open-source platforms |
| Hardware Independence | Huawei’s Ascend 910D chip being used to train and run large models — reducing dependency on NVIDIA despite export restrictions |
| U.S. Export Controls | Restrictions on high-end AI chips to China — originally designed to slow Chinese AI development; widely assessed to have accelerated efficiency-focused innovation instead |
| Open-Source Strategy | Chinese labs releasing models openly to build global adoption — contrasting with many closed U.S. systems and gaining significant international traction |
| Independent Ecosystem | Beijing building domestic AI infrastructure — compute, software, and training pipelines — reducing the leverage of U.S. sanctions over time |
| Global Adoption Risk | As Chinese AI becomes cheaper and more accessible internationally, some markets that might have defaulted to U.S. technology are choosing alternatives |
| Washington’s Position | Export controls remain in place but their effectiveness is diminishing; no clear successor policy has emerged |
| Further Reading | Analysis and policy coverage at CSIS Strategic Technologies Program |
The U.S. still leads in parts of model scale and infrastructure depth, and Chinese AI models are not consistently superior to their American counterparts. But conventional benchmarks no longer show the clear performance gap of three or four years ago, when American systems were plainly ahead on most criteria and Chinese labs were visibly playing catch-up.
With 942 million downloads on open-source platforms, Alibaba’s Qwen family has achieved a level of global adoption that America’s closed-source models cannot directly match. The export controls were designed to limit brute-force compute, but DeepSeek’s efficiency-focused architecture demonstrated that compute is not the only route to capable AI. China found a different one, and the restrictions may well have accelerated the search for it.
The irony of the export control regime is most apparent in hardware. The restrictions on NVIDIA’s high-end GPUs were meant to cap Chinese AI capability by cutting off access to the most powerful training infrastructure. Instead, domestic alternatives accelerated: Huawei’s Ascend 910D is now being used to train and run large models in China.
The chip still trails NVIDIA’s H100 in raw performance, but the gap has been closing as Chinese labs have learned to work around hardware limits with distributed training techniques and algorithmic efficiency. The export restrictions were not supposed to work this way; in practice, they encouraged the construction of exactly what they prohibited.
The open-source dimension complicates matters further. Many of the top AI systems in the United States remain closed, accessible only through proprietary APIs with usage limits and access constraints. Chinese labs, operating in an environment that makes participation in Western commercial markets difficult, have made open-source release their default route to global reach. A publicly released model can be downloaded, modified, and deployed by anyone, anywhere, without the publisher’s knowledge or supervision.

Qwen’s 942 million downloads are not driven primarily by Chinese users. They reflect adoption across South America, Africa, Southeast Asia, and other regions where the cost differential between Chinese and American AI weighs heavily in technology choices. Whatever Washington’s policy says, that adoption creates switching costs that favor continued Chinese use.
It is hard to escape the conclusion that the United States has fewer options now than it did two years ago, and that they have narrowed faster than the policy apparatus was built to handle.
The export controls remain in place, and their scope keeps expanding as new workarounds are discovered, but the results have repeatedly called into question whether they can sustain a durable performance gap. What comes next is genuinely unclear: greater domestic investment, new diplomatic frameworks, or something no one in Washington has yet identified. The race is still on. It has simply become harder to read.