AMD’s Next-Gen RDNA 3 GPUs to Feature Enhanced Ray-Tracing Capabilities Using A “Hybrid” Approach

AMD and NVIDIA are currently the only two companies capable of producing industry-standard GPU solutions at competitive prices; Intel is trying to break into that market, but that’s about it. There is a reason why no one else even dabbles in this field: the sheer R&D cost of such an endeavor would bankrupt most firms, and the prior experience, world-class expertise, and sustained commitment needed to manufacture even a half-decent product are beyond the reach of almost any newcomer. 

NVIDIA’s RTX head start

While AMD and NVIDIA both dominate this industry, their approaches are starkly different. The main thing that separates the two is the underlying architecture powering each company’s latest and greatest GPUs, and both have their benefits and drawbacks. AMD is generally considered the budget gamer’s choice, given its cheaper, more value-oriented options. 

One thing NVIDIA is indisputably better at is ray tracing. Ray tracing is not a new concept; it has been used in the film and media industry to render CGI and VFX for decades. However, it only made its way to mainstream consumer GPUs a few years ago with NVIDIA’s RTX 20-series. NVIDIA banked on “RTX” as a unique selling point for its graphics cards, even though the first generation of the tech was mostly lackluster. 

Then, with Ampere, NVIDIA built upon the foundation of RTX and improved it considerably thanks to its second-gen RT Cores. The RTX 30-series had great ray tracing performance all around and with an assist from DLSS, games never looked better. 

NVIDIA GeForce RTX 30-series GPUs | NVIDIA

AMD, on the other hand, only introduced hardware-accelerated, aka true, ray tracing this generation, with its RDNA 2-based Radeon RX 6000 GPUs. The company was therefore chasing a two-year head start that put NVIDIA in front. Regardless, RDNA 2 ray tracing was nothing to boast about, delivering results similar to NVIDIA’s first generation of RTX.

Now, with both companies ready to battle it out once again with their next generation of GPUs, the ray-tracing debate has reignited. Can RDNA 3 match NVIDIA’s ray-tracing prowess? Does it even need to? Overclock3D took a deep dive into information from AMD’s recent Financial Analyst Day 2022 and answered exactly that. 

RDNA 3 and improved ray-tracing

AMD held its Financial Analyst Day on June 10th this year. While every media publication, including us, has already covered the presentation to death, some things were apparently overlooked. Amid the big highlights, the more nuanced hints at future releases were missed, and that’s where AMD’s answer to NVIDIA’s massively superior ray tracing lies.

Before we look at that, let’s quickly recap some of the major features AMD is highlighting for RDNA 3:

  • 5nm Process Node
  • Advanced Chiplet Packaging
  • Rearchitected Compute Unit
  • Optimized Graphics Pipeline
  • Next-Gen AMD Infinity Cache
  • >50% Perf/Watt vs RDNA 2
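To put the “>50% Perf/Watt vs RDNA 2” bullet in concrete terms, here is a quick back-of-the-envelope sketch in Python. The 100 fps and 300 W figures below are hypothetical placeholders chosen for easy arithmetic, not AMD benchmarks:

```python
# Illustrative arithmetic only: the frame rate and power figures
# are hypothetical, not measured RDNA 2 numbers.
rdna2_fps = 100.0    # hypothetical RDNA 2 frame rate in some game
rdna2_watts = 300.0  # hypothetical board power

gain = 1.5  # ">50% perf/watt vs RDNA 2", taken at face value

rdna2_ppw = rdna2_fps / rdna2_watts  # fps per watt
rdna3_ppw = rdna2_ppw * gain

# Same 300 W power budget -> proportionally higher frame rate
fps_same_power = rdna3_ppw * rdna2_watts
# Same 100 fps target -> proportionally lower power draw
watts_same_fps = rdna2_fps / rdna3_ppw

print(fps_same_power, watts_same_fps)
```

In other words, a 50% perf/watt uplift means roughly 1.5x the frame rate at the same power, or the same frame rate at about two-thirds the power, assuming the gain applies uniformly.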

Now that we’re up to speed: David Wang, Senior Vice President of Radeon at AMD, discussed RDNA 3 during the presentation and mentioned several other improvements the architecture will bring over RDNA 2. In hindsight, a couple of takeaways from his statement only reveal themselves now. Take a look at this excerpt:

It (RDNA3) is also our first gaming GPU architecture that will leverage the enhanced 5nm process and an advanced chip packaging technology. And another innovation includes rearchitected Compute Units with enhanced ray-tracing capabilities and an optimized graphics pipeline with even faster clock speeds and improved power efficiency.

And to bring more photorealistic effects into the domain of real-time gaming, we are developing hybrid approaches that takes the performance of the rasterization combined with the visual fidelity of raytracing, to deliver the best real-time immersive experiences without compromising performance.

As you can see above, David Wang mentions how RDNA 3 includes rearchitected Compute Units (CUs) that are engineered to feature “enhanced ray-tracing capabilities”. While we obviously knew RDNA 3 would house rearchitected CUs, since it’s literally a new architecture, everyone missed the point about ray tracing.

AMD didn’t plaster this point across its website or press material, which is why no one seemed to care about it initially. We don’t really know what these enhanced capabilities actually are; the best guess is that the new features of the RDNA 3 architecture combine to deliver better ray-tracing results.

Moreover, RDNA 3 supposedly brings forth an “optimized graphics pipeline” that enables even faster clock speeds and improved power efficiency. 

That means each Compute Unit will be able to complete more cycles in a given period of time: the higher the clock speed, the more work each CU gets done, and the better the overall performance. It’s a domino effect of greatness, leading to better performance all around. And since each CU is now working harder, fewer of them are needed to tackle a given task, which results in a major boost in efficiency. 
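The relationship between CU count, clock speed, and throughput can be sketched with a simple peak-FLOPS model. The numbers below are illustrative: 80 CUs with 64 ALUs each roughly matches an RX 6900 XT-class part, and the 3 GHz figure is a hypothetical boost clock, not a confirmed RDNA 3 spec:

```python
# Back-of-the-envelope throughput model:
# peak FP32 TFLOPS ~= CUs * ALUs_per_CU * 2 * clock(GHz) / 1000
# (the factor of 2 counts a fused multiply-add as two operations)
def peak_tflops(cus: int, alus_per_cu: int, clock_ghz: float) -> float:
    return cus * alus_per_cu * 2 * clock_ghz / 1000.0

# Hypothetical RX 6900 XT-class configuration at its ~2.25 GHz boost clock
rdna2_like = peak_tflops(80, 64, 2.25)
# Same CU count pushed past the 3 GHz mark: raw throughput scales with clock
higher_clock = peak_tflops(80, 64, 3.0)

print(rdna2_like, higher_clock)
```

The model shows why clocks matter: with CU count held fixed, peak throughput scales linearly with frequency, so breaking the 3 GHz barrier is a direct (if simplistic) path to more raw performance.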

AMD RDNA 2 Compute Unit, which will soon be dwarfed by the massively-superior RDNA 3 Compute Unit | AMD

AMD already enjoys a minor lead in this area as their current-gen Radeon RX 6000 cards reach clock speeds of nearly 3GHz when pushed to absolute limits. Now, with an improved architecture and a more advanced 5nm process node from TSMC, we can expect RDNA 3 GPUs to easily break past that 3GHz barrier.

That being said, NVIDIA’s next-gen RTX 40-series GPUs are also expected to operate around that 3GHz frequency, so this isn’t just a nice flex for AMD; keeping clock speeds high is a necessity to remain competitive. NVIDIA is using a custom 5nm-class TSMC node, called “4N”, for its Ada Lovelace GPUs, which already nets it the upper hand there.

Lastly, it’s important to talk about that “hybrid approaches” comment pertaining to ray tracing. AMD sees ray tracing a bit differently than NVIDIA. In a fully ray-traced game environment, everything you see on screen is ray-traced. That requires a metric ton of graphical horsepower and is very taxing on the GPU, but it ultimately delivers a beautiful image.

Ray-tracing on vs. off in Battlefield V | Cycu1

AMD, on the other hand, will take a hybrid approach to tackle this issue. Instead of the entire scene being ray-traced, the company will use traditional rasterization and ray tracing in tandem. That means certain reflections and some lighting are ray-traced while the rest of the scene is rendered normally, which results in improved performance. 

In this way, you get the best of both worlds: a ray-traced image that looks incredibly photorealistic, and superior performance that doesn’t tank the frame rate. In simple words, AMD will ray trace where it matters and where it would actually make a difference.
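The hybrid idea can be illustrated with a toy renderer that rasterizes everything and spends the expensive ray-tracing budget only on surfaces where it visibly matters. The material names and the split between the two paths here are entirely hypothetical, a conceptual sketch rather than how any real engine or AMD’s hardware actually decides:

```python
# Toy sketch of hybrid rendering: cheap rasterized shading by default,
# with the expensive ray-traced path reserved for reflective surfaces.
# Material classes below are hypothetical examples.
RAY_TRACED_MATERIALS = {"mirror", "glass", "water"}

def shade(material: str) -> str:
    if material in RAY_TRACED_MATERIALS:
        return "ray-traced reflection"  # expensive path, used sparingly
    return "rasterized shading"         # cheap path for everything else

scene = ["concrete", "mirror", "cloth", "water", "metal"]
print([shade(m) for m in scene])
```

In this sketch only two of the five surfaces take the costly path, which is the whole point: most of the frame stays on the fast rasterizer while the eye-catching reflections still get ray-traced.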

RDNA 3 vs. NVIDIA Ada Lovelace

With so many factors working in AMD’s favor this time around, the battle between next-gen GPUs is about to be the toughest we’ve seen yet. An upgraded graphics pipeline, advanced packaging techniques, a >50% performance-per-watt improvement, and next-gen Infinity Cache all combine to create the best GPU the Red Team has ever made.

AMD RDNA 3 features | AMD

A flagship Navi 3X GPU is reportedly in the works for 2023, and it’s said to be an absolute behemoth in terms of overall performance. Whether AMD will keep it restricted to its Radeon Pro workstation offerings or position it as a competitor to a 40-series RTX TITAN remains to be seen. But if it does make its way to the gaming segment, it will be AMD’s premier graphics card, one that represents what Radeon is truly capable of. 

All in all, a lot is riding on RDNA 3 at this point. Both NVIDIA’s and AMD’s next-gen architectures have been hyped to oblivion, but it’s AMD that has to prove itself once and for all. If RDNA 3 prevails against Ada Lovelace and Intel’s Arc A-series, no one will hesitate to crown it the definitive GPU champ this time around.


Huzaifa Haroon

Born and raised around computers, Huzaifa is an avid gamer and a keyboard enthusiast. When he's not solving the mysteries of technology, you can find him scrutinizing writers, striving to inform the curious.