Podcast Episode
China Achieves AI Breakthrough With First Major Model Trained Entirely on Domestic Chips
January 19, 2026
Chinese artificial intelligence startup Zhipu AI has released an open-source image generation model that represents a significant milestone in Beijing's push for technological self-reliance. The model, called GLM-Image, is the first state-of-the-art multimodal AI trained entirely on domestically manufactured chips, marking a major achievement amid ongoing US export restrictions on advanced semiconductor technology.
Technical Achievement and Architecture
GLM-Image was trained end-to-end on Huawei Ascend Atlas 800T A2 hardware using Huawei's MindSpore machine learning framework. The model employs a hybrid architecture combining a 9 billion parameter autoregressive model with a 7 billion parameter diffusion decoder. This represents a complete domestic technology stack, from the hardware layer through the training framework to the final model.

The achievement is particularly notable given the technical specifications of the Huawei Ascend 910C chip, which achieves approximately 800 teraflops at FP16 precision. This represents roughly 80 percent of the computing power offered by NVIDIA's H100 chip, which has become the industry standard for training advanced AI models. While not matching the peak performance of cutting-edge Western hardware, the Ascend chips have proven sufficient for training competitive models.
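As a rough back-of-the-envelope check on the comparison above (taking the 800 TFLOPS figure reported for the Ascend 910C and NVIDIA's published ~989 dense FP16 TFLOPS for the H100; real-world throughput varies by workload), the ratio can be computed directly:

```python
# Back-of-the-envelope comparison of peak FP16 throughput.
# Figures are approximate: 800 TFLOPS for the Ascend 910C (as reported
# above) and ~989 dense FP16 TFLOPS for the NVIDIA H100.
ascend_910c_tflops = 800.0
h100_tflops = 989.0  # dense FP16; roughly double with structured sparsity

ratio = ascend_910c_tflops / h100_tflops
print(f"Ascend 910C is about {ratio:.0%} of H100 peak FP16 throughput")
```

The result, about 81 percent, is consistent with the "roughly 80 percent" figure cited in coverage of the chip.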
Performance Benchmarks
GLM-Image has demonstrated strong performance on industry-standard benchmarks. On the CVTG-2K benchmark, which measures text rendering accuracy, the model achieved a Word Accuracy score of 0.9116, ranking first among open-source alternatives. On the LongText-Bench test, it scored 0.952 for English and 0.979 for Chinese, leading its category in both languages.

These results indicate that models trained on domestic Chinese hardware can achieve competitive performance with those trained on Western chips, challenging assumptions about the technological gap between Chinese and American AI capabilities.
Strategic Context and US Sanctions
The development carries significant strategic implications for Zhipu AI, which the US Commerce Department added to its Entity List in January 2025. The designation stemmed from allegations that the company was advancing China's military modernization through AI development. It effectively cut the Beijing-based company off from NVIDIA H100 and A100 GPUs, which have become standard equipment for training advanced AI models worldwide.

Rather than halting development, the sanctions appear to have accelerated China's domestic innovation. Chinese authorities have informally advised local technology companies to prioritize domestic chips over foreign alternatives, creating a parallel ecosystem for AI development that operates independently of Western supply chains.
Market Response and Commercial Availability
The announcement triggered a substantial market response, with shares in Zhipu, officially listed as Knowledge Atlas Technology JSC Ltd., surging more than 17 percent. The move sparked a broader rally in Chinese chipmaking stocks as investors recognized the implications for China's semiconductor industry and AI development capabilities.

Zhipu has made GLM-Image commercially available through multiple channels. The company offers API access at 0.1 yuan (approximately 1.4 US cents) per generated image, positioning it as a cost-effective option for enterprises generating marketing materials and text-heavy visual content. The model weights have also been released on GitHub, Hugging Face, and ModelScope, allowing for independent deployment and further development by the open-source community.
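To illustrate what the quoted pricing means at enterprise scale (assuming the reported 0.1 yuan per image and an exchange rate of roughly 7.1 yuan per US dollar, which yields the quoted ~1.4 cents; rates fluctuate), a batch cost can be estimated:

```python
# Rough cost estimate at Zhipu's quoted API price for GLM-Image.
# Assumptions: 0.1 yuan per image; ~7.1 CNY per USD (rates fluctuate).
PRICE_CNY_PER_IMAGE = 0.1
CNY_PER_USD = 7.1

def batch_cost_usd(num_images: int) -> float:
    """Estimated USD cost of generating num_images images."""
    return num_images * PRICE_CNY_PER_IMAGE / CNY_PER_USD

# A marketing team generating 10,000 images would pay on the order of $140.
print(f"10,000 images: about ${batch_cost_usd(10_000):.2f}")
```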
Recent Corporate Developments
The GLM-Image release comes just one week after Zhipu debuted on the Hong Kong Stock Exchange, becoming the first among China's AI tigers, a group of leading startups at the forefront of the country's AI development, to go public. The initial public offering raised HK$4.17 billion, with 70 percent earmarked for research and development activities. Since the IPO, Zhipu's shares have jumped more than 80 percent as investors buy into China's AI industry and domestic chip ambitions.

Huawei's Expanding AI Ecosystem
Huawei has detailed an ambitious multi-year roadmap for its Ascend AI processors. The Ascend 950PR is scheduled for release in Q1 2026, with the Ascend 950DT following later in the year. Production is ramping up significantly, with projections indicating Huawei could reach 600,000 units by 2026. This expansion threatens to erode NVIDIA's market share in the region, particularly as Chinese companies increasingly adopt domestic alternatives.

However, complete technological self-reliance remains a work in progress. The Ascend 910C incorporates advanced components from Taiwan Semiconductor Manufacturing Co., Samsung Electronics, and SK Hynix, indicating that fully indigenous production capabilities are still years away. New US guidance prohibits American and non-American persons from using, selling, transferring, financing, or servicing Huawei's Ascend 910B, 910C, and 910D chips, adding another layer of complexity to the technology landscape.
Implications for Global AI Development
The successful training of GLM-Image on domestic hardware raises fundamental questions about the effectiveness of technology export controls. Rather than constraining China's AI capabilities, the restrictions appear to be driving the development of a completely separate AI hardware and software ecosystem. This parallel development path has significant implications for the global technology landscape, potentially leading to divergent standards, architectures, and capabilities between Chinese and Western AI systems.

The achievement also demonstrates that technological leadership in AI is not solely determined by access to the most advanced hardware. Effective software frameworks, training methodologies, and architectural innovations can partially compensate for hardware limitations, allowing competitive models to be developed on less powerful chips.
Industry Reactions
In a statement, Zhipu emphasized the broader significance of the achievement, saying it proves the feasibility of training high-performance multimodal generative models on a domestically developed full-stack computing platform. The company expressed hope that the release can serve as a valuable reference for the community in exploring the potential of domestic computing power.

The development has also encouraged other Chinese chipmakers to accelerate their efforts. Cambricon is preparing to more than triple its production of AI chips in 2026, adding to expectations of expanding local supply and further reducing dependence on foreign semiconductor technology.
Published January 19, 2026 at 10:50am