Enthusiasts Have Mixed Feelings Regarding AMD’s New Instinct GPUs

AMD Instinct is AMD’s brand of professional GPUs. They are often used for heavy workloads such as AI and machine learning, and act as a counterpart to NVIDIA’s data-center GPUs.

AMD released the Instinct MI250 GPU back in 2021. Being relatively expensive and not commonly used, it never received many benchmarks. Recently, ProjectPhysX on Twitter shared her findings after running a few tests on this GPU, and her conclusions were rather intriguing.

In a series of tweets (a thread), she gave her opinion regarding the performance one can expect, along with some open questions.

AMD Instinct MI250 | ProjectPhysX

The MI250’s description from AMD’s side is misleading. The product is marketed as a single GPU; however, it consists of two GCDs (Graphics Compute Dies), each effectively its own GPU, making it an MCM (Multi-Chip Module) design. The 128 GB of memory featured on the card is split across the two GCDs.

So what’s wrong with that? The problem is that one GCD cannot directly access the data in the other GCD’s memory. Think of it as AMD’s take on SLI, but for compute workloads. Many algorithms and software packages are not tuned to work with multiple GPUs, meaning that the 128 GB of memory is often just 64 GB in disguise. Some algorithms may even run slower because of this software bottleneck.
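The split-memory problem can be sketched in a few lines. This is a minimal, framework-agnostic illustration in plain Python (the "devices" are just lists standing in for GCD memory pools; none of the names here are an actual AMD or HIP API): each die can only compute on its own half of the data, and combining results takes an explicit extra step that single-GPU code never has to write.

```python
# Sketch of why a dual-GCD part behaves like two GPUs: each "device" owns
# its own memory pool, and combining results requires an explicit transfer
# step. Plain Python stand-in, not real GPU code.

data = list(range(1000))

# The advertised 128 GB pool is really two 64 GB pools: split the working
# set in half, one half per GCD.
dev0_mem = data[:500]
dev1_mem = data[500:]

def local_sum(device_mem):
    """A 'kernel' that can only touch the memory of the die it runs on."""
    return sum(device_mem)

# Each GCD reduces its own half independently...
partial0 = local_sum(dev0_mem)
partial1 = local_sum(dev1_mem)

# ...and the partial results must then be explicitly moved (host copy or
# inter-die link) and merged. Software that skips this step only ever
# uses one die -- the "64 GB in disguise" problem.
total = partial0 + partial1

print(total)
```

Software written for a single flat memory space simply never performs that final exchange, which is why it sees half the memory and half the machine.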

ProjectPhysX then goes on to say that the MI250 looks decent on paper, sometimes even faster than NVIDIA’s A100. However, in bandwidth-bound applications and tests such as lattice Boltzmann simulations, the GPU fails to deliver performance on par with the A100. The reason is that its efficiency in these tests is closer to that of NVIDIA’s Kepler architecture, which, mind you, is about 10 years old.
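The lattice Boltzmann case shows why efficiency matters more than spec-sheet numbers: such solvers are bandwidth-bound, so the theoretical ceiling is simply memory bandwidth divided by the bytes streamed per cell update. A back-of-the-envelope sketch (the bandwidth figures and byte counts below are ballpark assumptions for illustration, not measured values from ProjectPhysX’s tests):

```python
# Roofline-style ceiling for a bandwidth-bound lattice Boltzmann solver:
# updates/s ~= memory bandwidth / bytes moved per cell update.
# All figures are assumed spec-sheet ballparks, not measurements.

def peak_mlups(bandwidth_gb_s, bytes_per_update):
    """Theoretical ceiling in million lattice-cell updates per second."""
    return bandwidth_gb_s * 1e9 / bytes_per_update / 1e6

# D3Q19 lattice in FP32: 19 densities read + 19 written = 152 bytes/cell.
BYTES_D3Q19_FP32 = (19 + 19) * 4

# Assumed peak bandwidths (GB/s). Sustained real-world numbers are lower,
# and that gap between peak and sustained is exactly the inefficiency
# the benchmarks point at.
mi250_ceiling = peak_mlups(3200, BYTES_D3Q19_FP32)  # both GCDs combined
a100_ceiling = peak_mlups(1550, BYTES_D3Q19_FP32)

print(f"MI250 ceiling: {mi250_ceiling:.0f} MLUPs")
print(f"A100 ceiling:  {a100_ceiling:.0f} MLUPs")
```

On paper the MI250’s ceiling is roughly double the A100’s, which is why the measured shortfall points at hardware efficiency and software maturity rather than raw bandwidth.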

The chart may seem large; however, our primary focus should be on the NVIDIA A100, which posts much higher performance scores than the Instinct MI250.

A comparison of AMD’s Instinct GPUs with other architectures | ProjectPhysX
Not all hope is lost, as this is still a major improvement over the last generation (MI100), featuring twice the memory per die (32 GB on the MI100 vs. 64 GB per GCD on the MI250). On top of that, the node design is also considerably better, packing 8 GPUs across 4 sockets with a fast interconnect.

AMD is aiming at data centers, offering them a high-speed and cost-effective computing solution, although NVIDIA still poses a real threat with its upcoming Hopper architecture for AI workloads.

You can read more about the MI250 here.

Abdullah Faisal
With a love for computers since the age of five, Abdullah has always sought to delve into the depths of information, and uses it as his guiding light. He believes success is of utmost importance as history is written by the victor.