News

Jensen Huang provided the update during a livestream posted by Taiwan's Formosa TV News network, so it looks like NVIDIA will be jumping from the Hopper AI GPU architecture to Blackwell for China.
Researchers across Taiwan are tackling complex challenges in AI development, climate science and quantum computing. Their work will soon be boosted by a new supercomputer at Taiwan’s National ...
“Nvidia DGX Cloud Lepton connects our network of global GPU cloud providers with AI developers,” said Jensen Huang, founder and chief executive of Nvidia. “Together with our NCPs, we’re ...
With the Lepton software, "the global GPU supply is intelligent and connected, delivering a virtual global AI factory at a planetary scale," said Alexis Bjorlin, head of Nvidia's DGX Cloud ...
Rumor mill: Nvidia is reportedly set to make a move many ... The claim states that up to 30% of RTX 5000 GPU production in China will be cut as Team Green focuses on its biggest money-maker ...
The service, called DGX Cloud Lepton, is designed to link artificial intelligence developers with Nvidia’s network of cloud providers, which provide access to its graphics processing units ...
TL;DR: NVIDIA plans to launch a special edition Blackwell AI GPU for China, possibly named 6000D or B40, featuring GDDR7 memory with 1.7TB/sec bandwidth and NVLink speeds of 550GB/sec. Expected to ...
Deutsche Telekom is now offering NVIDIA H100 Tensor Core GPUs for rent from the Open Telekom Cloud. The GPUs ...
This is the first time Nvidia has actively excluded almost all independent reviews and voices from participating in the launch narrative of a mass-market GPU. This strategy has troubling ...
Now in its fifth generation, NVLink supports up to 1.8 TB/s of bidirectional bandwidth per GPU, supporting up ... By combining this broad partner network with Nvidia technology, NVLink Fusion ...
In addition, the NVIDIA HGX reference architecture, optimized by ASUS, delivers unmatched efficiency, thermal management, and GPU density for accelerated AI fine-tuning, LLM inference, and training.