AI Start-Up Challenges Nvidia's Dominance with Beijing Support
A new artificial intelligence (AI) framework created by teams linked to China’s Tsinghua University aims to lessen dependence on Nvidia chips for AI model inference. This effort represents a significant stride towards achieving technological self-sufficiency in the country.
The framework, named Chitu, is a high-performance inference engine designed for large language models (LLMs). It can run on domestically produced Chinese chips, challenging the dominant position of Nvidia's Hopper-series graphics processing units (GPUs) in serving models such as DeepSeek-R1. The framework was announced in a joint statement by the start-up Qingcheng.AI and a research team led by computer science professor Zhai Jidong at Tsinghua University.
AI frameworks like Chitu are crucial because they provide the libraries and tools developers need to design, train, and validate complex AI models efficiently. According to the company, Chitu was open-sourced last Friday and supports popular models from organizations such as DeepSeek and Meta Platforms' Llama series.
In tests running the full version of DeepSeek-R1 on Nvidia A800 GPUs, the Chitu framework delivered a 315 percent increase in model inference speed while cutting GPU usage by 50 percent compared with foreign open-source frameworks.
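Taken at face value, the two reported figures compound. The following sketch is purely illustrative arithmetic based on the announced numbers, not data from the announcement itself: a 315 percent speed increase means throughput is multiplied by 4.15, and halving GPU usage doubles the per-GPU share of that throughput.

```python
# Illustrative arithmetic only, using the two figures reported for Chitu.
baseline_throughput = 1.0   # normalized inference throughput of the baseline
baseline_gpus = 1.0         # normalized GPU count of the baseline

chitu_throughput = baseline_throughput * (1 + 3.15)  # reported +315% speed
chitu_gpus = baseline_gpus * 0.5                     # reported -50% GPU usage

# Throughput delivered per GPU, relative to the baseline.
per_gpu_gain = (chitu_throughput / chitu_gpus) / (baseline_throughput / baseline_gpus)
print(per_gpu_gain)  # 8.3
```

In other words, if both claims hold simultaneously, each GPU would deliver roughly 8.3 times the baseline per-GPU throughput.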
As China seeks to reduce its reliance on Nvidia amid U.S. export controls, the launch of cost-effective initiatives such as DeepSeek underscores that ambition. The U.S. government has barred Nvidia from selling its advanced Hopper-series H100 and H800 chips to clients based in China, an immediate hurdle for Chinese AI companies that depend on these high-performance GPUs.