News

Its first NVIDIA DGX SuperPOD H100 system combines NVIDIA's top hardware, software, and networking technology to offer a scalable, efficient, and easily deployable solution.
Nvidia said the new Equinix Private AI service will help channel partners close DGX system deals “faster” because of how the solution addresses pain points with supercomputer deployments.
AMD was specifically comparing the MI300X to Nvidia's DGX H100 system, which is set to make way for the higher-performance DGX GH200 system later in 2024.
Nvidia CEO Jensen Huang has confirmed that an upcoming iteration of the company's server family will be liquid-cooled. Huang let slip the detail during a presentation at the 2024 SIEPR Economic Summit ...
NVIDIA says it created a supercomputer designed to help build generative AI models. The architecture of the DGX GH200 enables hundreds of powerful chips to act as a single GPU.
Nvidia describes Eos as a system that can power an "AI factory," as it is a very large-scale DGX SuperPOD built from DGX H100 systems. The company says Eos is what allows it to develop its own AI breakthroughs and ...
The significant computational power provided by a single DGX GH200 system makes it well-suited to advancing the training of sophisticated language models.
San Jose, March 22, 2022 — NVIDIA today announced the fourth-generation NVIDIA DGX system, which the company said is the first AI platform to be built with its new H100 Tensor Core GPUs. DGX H100 ...
This DGX H100 SuperPOD system had 32 DGX H100 systems, each with eight GPUs and a pair of “Sapphire Rapids” Xeon SP processors from Intel that did not have NVLink ports and therefore could not have ...
The new system is known as the Nvidia DGX GH200, and it will apparently be capable of a massive 1 exaflop of performance.
Each Grace Hopper unit combines a Grace CPU and an H100 Tensor Core GPU, and the DGX GH200 system is supposedly able to deliver one exaflop of compute performance as well as ten times more memory ...
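As a rough sanity check on that exaflop figure, and assuming (per NVIDIA's published materials) 256 Grace Hopper superchips per DGX GH200 and roughly 3.96 PFLOPS of FP8 Tensor Core throughput (with sparsity) per H100, the arithmetic works out to about:

$256 \times 3.96\,\text{PFLOPS} \approx 1.01\,\text{EFLOPS (FP8, with sparsity)}$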