News

Intel's Computex 2025 event showcased the Gaudi 3 AI accelerator and new Arc graphics cards, positioning the company to gain share in AI ...
Intel's Gaudi 3 AI accelerator launches to simplify enterprise AI development, with 64 tensor processor cores, 128 GB of HBM, 96 MB of SRAM, and more.
Gaudi 3 vs. Nvidia: Intel spent no time on AMD comparisons, preferring to tout its performance advantages over the Nvidia H100 and H200 in training and inference.
At Intel Vision 2024, Intel introduced the Gaudi 3 accelerator and unveiled new open systems, next-generation products, and strategic collaborations.
Intel projects that Gaudi 3 will significantly outperform competing products such as Nvidia's H100 and H200 in training speed, inference throughput, and power efficiency across models of various parameter sizes.
Intel Gaudi 3: Intel's latest advancements in AI infrastructure span two major updates to its data center portfolio, the Gaudi 3 accelerator and Intel Xeon 6 with P-cores.
Intel says its new Gaudi 3 AI accelerator offers up to 1856 BF16/FP8 matrix TFLOPS and up to 28.7 BF16 vector TFLOPS at around 600 W TDP, which, when compared to the NVIDIA H100 AI GPU ...
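Those figures imply a compute density that is easy to check. A minimal back-of-envelope sketch in Python, using only the values quoted above (the truncated H100 comparison is left out):

```python
# Back-of-envelope compute density, using only the figures quoted above.
matrix_tflops = 1856.0   # Intel's claimed peak BF16/FP8 matrix TFLOPS
vector_tflops = 28.7     # Intel's claimed peak BF16 vector TFLOPS
tdp_watts = 600.0        # Intel's quoted approximate TDP

print(f"matrix density: {matrix_tflops / tdp_watts:.2f} TFLOPS/W")
print(f"vector density: {vector_tflops / tdp_watts:.3f} TFLOPS/W")
# -> matrix density: 3.09 TFLOPS/W
# -> vector density: 0.048 TFLOPS/W
```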
Intel said its strategy for Gaudi 3 accelerator chips will not focus on chasing the market for training massive AI models that has created seemingly unceasing demand for Nvidia’s GPUs, ...
Gaudi 3 delivers 40% faster time-to-train for large language models versus NVIDIA's H100 AI chip and 50% faster LLM inferencing versus the H100, Intel said. Gaudi 3 may go head-to-head with ...
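A caveat on reading vendor speedup claims: "40% faster time-to-train" admits two different interpretations, and the snippet does not say which Intel intends. A small sketch of both readings:

```python
# "X% faster" is ambiguous in vendor copy. For the 40% training claim it
# can mean either 1.4x throughput (the job finishes in ~71% of the H100's
# time) or a 40% cut in wall-clock time (the job finishes in 60% of it).
throughput_reading = 1 / 1.40   # 1.4x speed -> fraction of H100 time
reduction_reading = 1 - 0.40    # 40% less time -> fraction of H100 time

print(f"throughput reading: {throughput_reading:.0%} of H100 training time")
print(f"reduction reading:  {reduction_reading:.0%} of H100 training time")
# -> 71% vs. 60% of H100 time, a meaningfully different claim
```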
Intel Gaudi 3 promises 4x the AI compute and 1.5x the memory bandwidth of its predecessor, Gaudi 2, to allow efficient scaling across large compute clusters and eliminate ...
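Combining that 4x ratio with the peak matrix throughput quoted earlier gives a rough implied Gaudi 2 baseline. A back-of-envelope sketch, under the unstated assumption that the 4x claim refers to the BF16/FP8 matrix figure; the article gives no absolute bandwidth number, so the 1.5x bandwidth ratio cannot be checked the same way:

```python
# Implied Gaudi 2 baseline, derived purely from the ratios in the article.
# Assumption (not stated in the snippet): the "4x more AI compute" claim
# applies to the 1856-TFLOPS BF16/FP8 matrix figure quoted earlier.
gaudi3_matrix_tflops = 1856.0
compute_ratio = 4.0   # Gaudi 3 vs. Gaudi 2, per Intel

implied_gaudi2_tflops = gaudi3_matrix_tflops / compute_ratio
print(f"implied Gaudi 2 matrix compute: ~{implied_gaudi2_tflops:.0f} TFLOPS")
# -> ~464 TFLOPS for the implied Gaudi 2 baseline
```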
Intel Corporation (INTC) recently inked a partnership with International Business Machines Corporation (IBM) to deploy its Gaudi 3 AI accelerators as a service in IBM Cloud. With this, IBM became the ...
Intel says Gaudi 3 can perform inference with up to 2.3 times the power efficiency of the H100, while some large language models take less time to train than on the H100.
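For context on what a 2.3x efficiency edge implies: since power efficiency is throughput per watt, the claim can be converted into an implied throughput ratio at each part's TDP. A sketch, assuming the ~600 W figure quoted earlier and the widely published 700 W TDP of the H100 SXM (the latter is not in the article):

```python
# What 2.3x inference power efficiency implies for raw throughput.
# Efficiency = throughput / power, so at the parts' respective TDPs:
#   T_gaudi3 / T_h100 = 2.3 * (P_gaudi3 / P_h100)
efficiency_ratio = 2.3
gaudi3_tdp_w = 600.0   # from the article
h100_tdp_w = 700.0     # assumption: the widely published H100 SXM TDP

throughput_ratio = efficiency_ratio * (gaudi3_tdp_w / h100_tdp_w)
print(f"implied inference throughput: ~{throughput_ratio:.1f}x the H100")
# -> ~2.0x, if both claims hold at full TDP simultaneously
```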