News

More specifically, DeepSeek claims it trained its V3 (chat) AI model using a cluster of 2,048 Nvidia H800 GPUs, at a training cost of approximately $5.5 million.
Huawei has released new benchmark results showing its CloudMatrix 384 AI infrastructure outperforming Nvidia's H800 GPU in ...
DeepSeek's models were initially trained on NVIDIA H800 GPUs; Huawei's Ascend 910C chips, which are set to rival NVIDIA's H100, are anticipated to enter mass production in early 2025.
Jim Cramer believes DeepSeek no longer poses a threat to outpace tech giants in the artificial intelligence sector.
Worse for Nvidia, the state-of-the-art V3 LLM was trained on just 2,048 of Nvidia’s H800 GPUs over two months, equivalent to about 2.8 million GPU hours, or about one-tenth the computing power ...
The company says it used a little more than 2,000 Nvidia H800 GPUs to train the bot, and it did so in a matter of weeks for $5.6 million. Others have reportedly deployed 10,000 or more GPUs, ...
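The training figures quoted in these reports can be sanity-checked with back-of-the-envelope arithmetic. The sketch below is a rough check only, assuming the 2,048-GPU cluster and roughly two months of training cited above, plus a ~$2-per-GPU-hour rental rate that DeepSeek has reportedly used in its own cost estimate; none of these inputs are verified here.

```python
# Rough cross-check of the reported DeepSeek-V3 training cost.
# Assumptions: 2,048 H800 GPUs, ~2 months of wall-clock training,
# and a ~$2/GPU-hour rental rate (reported, not independently verified).

num_gpus = 2048              # cluster size cited in the reports above
days = 60                    # "over two months" of training
gpu_hours = num_gpus * days * 24
print(f"Implied GPU-hours: {gpu_hours:,}")                 # ~2.95 million

rate_per_gpu_hour = 2.0      # assumed rental rate in USD
cost = gpu_hours * rate_per_gpu_hour
print(f"Implied training cost: ${cost / 1e6:.1f} million") # ~$5.9 million
```

Under these assumptions the arithmetic yields roughly 2.9 million GPU-hours and about $5.9 million, which is in the same ballpark as the ~2.8 million GPU-hours and ~$5.5-5.6 million figures quoted above; that consistency is all a check of this kind can show.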
Scale AI CEO Alexandr Wang said during an interview with CNBC on Thursday, without providing evidence, that DeepSeek has 50,000 Nvidia H100 chips, which he claimed would not be disclosed because ...
Singapore authorities investigate U.S. servers used in a fraud case, potentially containing Nvidia chips, with concerns over China-related AI chip smuggling. Dell and Super Micro supplied the hardware ...
DeepSeek is reportedly supporting China's military and has found ways around U.S. export restrictions on advanced semiconductor chips.