Samsung showcased its GDDR7 memory a while back. Today, thanks to Ian Cutress, we have come across a slide that gives us more information about Samsung's upcoming G7 memory.
36Gbps Speed & PAM3 Signaling
The slide confirms, first of all, that Samsung's GDDR7 memory will run at 36Gbps per pin. Over a 384-bit bus, that amounts to a massive 1.728TB/s of effective bandwidth, roughly 70% more than the RTX 4090 offers. Samsung is aiming this GDDR7 memory at the data center, HPC, mobile, gaming, and automotive market segments.
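The 1.728TB/s figure is easy to verify with back-of-the-envelope math (note that the 384-bit bus is a hypothetical flagship configuration; the slide only specifies the per-pin data rate):

```python
# Sanity check of the 1.728 TB/s effective bandwidth figure.
data_rate_gbps = 36      # per-pin data rate from the slide, in Gb/s
bus_width_bits = 384     # assumed flagship bus width (e.g. an RTX 4090-class card)

total_gbits = data_rate_gbps * bus_width_bits   # total Gb/s across the whole bus
total_gbytes = total_gbits / 8                  # convert bits to bytes
print(f"{total_gbytes / 1000} TB/s")            # prints: 1.728 TB/s
```

Swap in a narrower bus (say, 256-bit) to see what a mid-range GDDR7 card could offer.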
That's almost twice the speed of the memory AMD's RDNA2 GPUs used. The title of the world's fastest memory modules currently belongs to none other than Samsung, with its 24Gbps G6 modules. Those modules are already much faster than GDDR6X, which is made by Micron exclusively for NVIDIA.
The second important characteristic is PAM3 signaling. Micron's GDDR6X, by comparison, uses PAM4 signaling. PAM3 uses 3 distinct voltage levels, corresponding to 3 values: -1, 0, and 1; the receiver detects which value was sent by comparing the signal against set thresholds. According to the slide, this signaling scheme offers 25% better efficiency than the NRZ (Non-Return to Zero) signaling used in GDDR6.
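The reason three levels help is that two ternary symbols give 3² = 9 distinct states, enough to carry 3 bits (2³ = 8 patterns) in only 2 symbol slots, where NRZ would need 3. The sketch below illustrates the counting argument; the bit-to-symbol mapping is arbitrary, not the one any real GDDR7 PHY uses:

```python
from itertools import product

# Two PAM3 symbols (levels -1, 0, 1) yield 9 states; 3 bits need only 8.
LEVELS = (-1, 0, 1)
symbol_pairs = list(product(LEVELS, repeat=2))   # all 9 possible symbol pairs

# Arbitrary illustrative one-to-one mapping: 3-bit pattern -> pair of symbols.
encode = {bits: symbol_pairs[i]
          for i, bits in enumerate(product((0, 1), repeat=3))}

print(f"{len(symbol_pairs)} states available, {len(encode)} patterns used")
print("symbols needed for 3 bits — NRZ: 3, PAM3: 2")
```

In other words, PAM3 moves 1.5 bits per symbol versus NRZ's 1 bit, without needing the 4 voltage levels (and tighter noise margins) of PAM4.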
While we do not have a firm release date for G7, don't get your hopes up too high. GDDR7 is extremely fast, possibly too fast for the industry at the moment. NVIDIA's RTX 4000 and AMD's RX 7000 GPUs already use G6/G6X memory, so GDDR7 can only become a real option if either of these companies is planning a refresh.
It is hard to say whether Ada and RDNA3 refreshes could leverage this technology. NVIDIA has already taken the performance crown, and AMD has shown no interest in matching NVIDIA on performance. Instead, AMD is aiming to take hold of NVIDIA's market share, since a typical monolithic chip cannot compete against an MCM design in the lower-to-mid-range segment.
NVIDIA most probably won't pair Ada with G7, as we suspect it is under contract with Micron. AMD, on the other hand, could adopt it, but it would be pointless to spend a huge amount of cash on a Big Navi GPU just to beat NVIDIA on all fronts. That would simply be too costly for both AMD and the consumer.