Nvidia Lauds DeepSeek AI from China Despite 17% Stock Drop
Jakarta - American chipmaker Nvidia has praised the Chinese artificial intelligence (AI) model DeepSeek as an extraordinary advancement in the field. The praise came even as Nvidia's stock price fell 17 percent on Monday (January 27, 2025).
According to a spokesperson from Nvidia, "DeepSeek represents an outstanding achievement in AI and serves as a perfect example of test time scaling. The work done by DeepSeek illustrates how new models can be created through these techniques, leveraging widely available models while ensuring full compliance with export regulations," as reported by CNBC International on Tuesday (January 28, 2025).
This endorsement from Nvidia followed the release of DeepSeek's R1 model, which reportedly outperforms top models from American companies such as OpenAI.
The training cost for R1 was reported to be less than $6 million, a stark contrast to the billions spent by Silicon Valley firms on developing their own AI models.
Nvidia’s statement suggests that they perceive DeepSeek's breakthroughs as opportunities to boost demand for their graphics processing units (GPUs). The company noted, "Inference requires a significant number of Nvidia GPUs and high-performing networks. Currently, we have three scaling laws: pre-training, post-training, and the new test time scaling that is ongoing."
Additionally, Nvidia clarified that the GPUs used by DeepSeek fully comply with export regulations. The statement responds to comments from Scale AI CEO Alexandr Wang, who suggested that DeepSeek might be using Nvidia GPUs that are banned in mainland China. DeepSeek says it uses Nvidia GPUs designed specifically for the Chinese market.
Analysts are now questioning whether the billions invested by companies such as Microsoft, Google, and Meta in Nvidia-based AI infrastructure might be wasted if similar results can be achieved at a fraction of the cost.
Earlier this month, Microsoft announced plans to invest $80 billion in AI infrastructure in fiscal year 2025. Meanwhile, Meta CEO Mark Zuckerberg said the company intends to allocate between $60 billion and $65 billion in capital expenditures for AI over the same timeframe.
Nvidia's remarks also align with recent discussions among CEO Jensen Huang, OpenAI's Sam Altman, and Microsoft's Satya Nadella, all of whom have addressed themes surrounding AI and its scaling.
The AI boom and the high demand for Nvidia GPUs have been driven by scaling laws, a concept in AI development proposed by OpenAI researchers in 2020. These laws hold that AI systems keep improving as the computation and data used to build new models increase, which in turn requires more chips.
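For illustration only: the 2020 work by OpenAI researchers (Kaplan et al., "Scaling Laws for Neural Language Models") expressed this relationship as a power law. A representative form, with constants that are fitted empirically rather than taken from this article, is:

```latex
% Illustrative pre-training scaling law in the style of Kaplan et al. (2020):
% test loss L falls predictably as training compute C grows.
% C_c and \alpha_C are empirically fitted constants (assumptions here,
% not figures from the article).
L(C) \approx \left( \frac{C_c}{C} \right)^{\alpha_C}
```

In words: each multiplicative increase in training compute buys a predictable drop in error, which is why the scaling-law era translated so directly into demand for more GPUs.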
Since November, Huang and Altman have increasingly focused on a new dimension of scaling known as test-time scaling. The idea is that a trained model that spends more computational power when making predictions or generating text and images can produce more accurate results than one that answers quickly.
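The article does not describe how OpenAI or DeepSeek implement this, so the following is only a minimal sketch of one well-known form of test-time scaling, self-consistency sampling, in which the same model is queried several times and the answers are majority-voted. The `query_model` function is a hypothetical placeholder for a text-generation API, not a real library call:

```python
# Minimal sketch of one form of test-time scaling: self-consistency,
# i.e. sample the same model several times and majority-vote the answers.
# query_model is a hypothetical placeholder, not a real API.
import random
from collections import Counter

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for one stochastic LLM completion.

    Simulates a noisy solver that answers correctly 60% of the time.
    """
    return "56" if random.random() < 0.6 else random.choice(["54", "58", "63"])

def answer(prompt: str, num_samples: int = 1) -> str:
    """More samples means more inference compute and a more reliable answer."""
    votes = Counter(query_model(prompt) for _ in range(num_samples))
    return votes.most_common(1)[0][0]

random.seed(0)
prompt = "What is 7 * 8?"
print(" 1 sample :", answer(prompt, num_samples=1))    # cheap, often wrong
print("25 samples:", answer(prompt, num_samples=25))   # more compute, usually right
```

The point of the sketch is that the model itself is unchanged: spending 25 samples' worth of compute at inference time yields a more reliable answer than a single cheap sample, which is precisely the extra GPU demand Nvidia is pointing to.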
This form of test-time scaling has already been applied in several OpenAI models, such as o1, as well as in DeepSeek's groundbreaking R1 model, the centerpiece of China's latest advance in the AI sector.