On Thursday, AMD introduced its new MI325X AI accelerator chip, which is scheduled to roll out to data center customers in the fourth quarter of this year. At an event hosted in San Francisco, the company claimed the new chip offers "industry-leading" performance compared to Nvidia's current H200 GPUs, which are widely used in data centers to power AI applications such as ChatGPT.
With its new chip, AMD hopes to narrow the performance gap with Nvidia in the AI processor market. The Santa Clara-based company also revealed plans for its next-generation MI350 chip, which is positioned as a head-to-head competitor to Nvidia's new Blackwell system, with an expected shipping date in the second half of 2025.
In an interview with the Financial Times, AMD CEO Lisa Su expressed her ambition for AMD to become the "end-to-end" AI leader over the next decade. "This is the beginning, not the end of the AI race," she told the publication.
The AMD Instinct MI325X Accelerator.
Credit: AMD
According to AMD's website, the newly announced MI325X accelerator contains 153 billion transistors and is built on the CDNA 3 GPU architecture using TSMC's 5 nm and 6 nm FinFET lithography processes. The chip includes 19,456 stream processors and 1,216 matrix cores spread across 304 compute units. With a peak engine clock of 2100 MHz, the MI325X delivers up to 2.61 PFLOPS of peak eight-bit precision (FP8) performance. For half-precision (FP16) operations, it reaches 1.3 PFLOPS.
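As a back-of-the-envelope check, these peak figures are consistent with each stream processor performing about 64 FP8 (or 32 FP16) operations per clock cycle. A minimal sketch of that arithmetic follows; note that the ops-per-clock values are inferred from the published totals, not taken from AMD's spec sheet:

```python
def peak_pflops(stream_processors: int, clock_mhz: float, ops_per_clock: int) -> float:
    """Peak throughput in PFLOPS: processors x clock rate x operations per cycle."""
    return stream_processors * (clock_mhz * 1e6) * ops_per_clock / 1e15

# MI325X figures from the article; 64 FP8 / 32 FP16 ops per clock are assumptions.
fp8 = peak_pflops(19_456, 2100, 64)   # ~2.61 PFLOPS
fp16 = peak_pflops(19_456, 2100, 32)  # ~1.3 PFLOPS
print(round(fp8, 2), round(fp16, 1))
```

Plugging in the article's numbers reproduces the quoted 2.61 PFLOPS FP8 and roughly 1.3 PFLOPS FP16 figures, with FP16 throughput at half the FP8 rate as expected.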