News
“DGX Station A100 brings AI out of the data center with a server-class system that can plug in anywhere,” said Charlie Boyle, vice president and general manager of DGX systems at NVIDIA ...
Today NVIDIA unveiled the NVIDIA DGX A100 AI system, delivering 5 petaflops of AI performance and consolidating the power and capabilities of an entire data center into a single flexible platform.
Nvidia notes that the DGX GH200’s 144 terabytes of shared memory is nearly 500 times more than in a single Nvidia DGX A100 system. Let’s circle back to the 1 exaflop figure and break it down a little.
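As a rough back-of-the-envelope sketch of where those headline numbers come from (the per-chip figures here are assumptions not stated in these snippets: 256 Grace Hopper Superchips per DGX GH200, roughly 3.9 petaflops of FP8 compute per chip, and 320 GB of GPU memory in the original DGX A100):

    # Rough sketch of the DGX GH200 headline figures.
    # Assumed inputs (not from the snippets above): 256 Grace Hopper Superchips
    # per system, ~3.9 petaflops of FP8 per chip, 320 GB of GPU memory per DGX A100.
    superchips = 256
    fp8_petaflops_per_chip = 3.9          # approximate H100 FP8 throughput
    dgx_a100_memory_tb = 0.32             # 320 GB of GPU memory
    dgx_gh200_memory_tb = 144             # shared memory figure quoted by Nvidia

    total_exaflops = superchips * fp8_petaflops_per_chip / 1000
    memory_ratio = dgx_gh200_memory_tb / dgx_a100_memory_tb

    print(f"~{total_exaflops:.1f} exaflop of FP8 AI compute")   # ~1.0 exaflop
    print(f"~{memory_ratio:.0f}x the memory of a DGX A100")     # ~450x, i.e. "nearly 500x"

Under those assumptions, the arithmetic lands close to Nvidia’s quoted 1 exaflop and “nearly 500x” memory claims.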
Nvidia isn’t just supplying other companies with equipment though – it’s also announced plans to build its own DGX GH200-based supercomputer named Helios.
This is not the first workstation Nvidia has launched; the precursor to the 2025 DGX Station, the DGX Station A100, notably paired Nvidia GPUs with an AMD EPYC CPU.
NVIDIA says it created a supercomputer designed to help build generative AI models. The architecture of the DGX GH200 enables hundreds of powerful chips to act as a single GPU.
It’s called Nvidia DGX Cloud, and as the name suggests, it’s a cloud-hosted version of Nvidia’s popular DGX platform, which has become an enterprise standard for AI training.
The DGX GH200 provides 1 exaflop of performance and 144 terabytes of shared memory, about 500 times more memory than the previous-generation NVIDIA DGX A100 introduced in 2020, the company added.
This system, Nvidia’s DGX A100, has a suggested price of nearly $200,000, although that figure includes the eight A100 GPUs inside.
Through DGX Cloud, Microsoft Azure, Google Cloud, and Oracle will let Nvidia sell access to the A100 and H100 GPUs hosted in their data centers so customers can train large language models.