How NVIDIA Plans on Doing Driver-Based Frame Generation
- NVIDIA and AMD both enhance gaming via Frame Generation, which creates additional frames to smooth gameplay. NVIDIA uses complex methods tied to specific hardware, while AMD's simpler AFMF operates at the driver level without game developer input.
- It’s unclear if NVIDIA will adopt a driver-based Frame Generation like AMD's AFMF. Their existing technology relies heavily on newer GPUs and combines several methods for better quality, hinting at possible reluctance to shift strategies.
- Both methods improve frame rates but can introduce visual errors or lag. AMD’s AFMF is more versatile but potentially less refined, while NVIDIA’s approach prioritizes fidelity and may evolve with new GPU releases.
We have all probably heard the new gaming buzzword thrown around by both NVIDIA and AMD: 'Frame Generation'. Hailed as the future of gaming by its proponents, this technology still needs some ironing out before it becomes the go-to solution for increasing your performance.
With AMD's AFMF setting a new standard for a simple 'click and play' frame generation utility, will NVIDIA follow in AMD's footsteps and implement a universal, driver-based frame generation solution of its own? Let's find out.
Will NVIDIA Make a Driver-Based Frame Generation Solution?
Before we talk about the topic at hand, let’s go over how Upscaling and Frame Generation work and a few important-to-understand terminologies.
DLSS stands for Deep Learning Super Sampling and was introduced by NVIDIA in 2018 as part of its RTX lineup of GPUs. Put simply, DLSS collects data from past frames and, with the help of a trained neural network, uses that data to fill in the missing pixels of a new, higher-resolution frame. The result is an upscaled image with enhanced performance.
AMD's FSR, or FidelityFX Super Resolution, on the other hand, can work on almost every GPU. It uses an open-source spatial upscaling algorithm to upscale the game from a lower to a higher resolution. Since it is not hardware-dependent, it is far more widely compatible than DLSS.
A Few Important Terms and an Introduction to Frame Generation
With the RTX 40 lineup came DLSS 3.0 and NVIDIA Frame Generation. What exactly does 'generating frames' mean? At its simplest, Frame Generation is inserting a new frame between two rendered frames. This process is called interpolation, and it has long been a popular technique in TVs to increase the framerate of displayed content to match the TV's refresh rate.
Upscaling improves image quality and increases performance. Frame Generation, or FG for short, creates an entirely new frame between two existing frames, displays it, and then predicts the next one, all while keeping latency acceptable.
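To make the idea of 'a new frame between two existing frames' concrete, here is a deliberately naive Python sketch: it blends two tiny grayscale 'frames' pixel by pixel to produce the frame between them. Real frame generation warps pixels along estimated motion rather than averaging, so treat this purely as an illustration of the concept, not how any actual driver works.

```python
def interpolate_midframe(frame_a, frame_b):
    """Naively synthesize the frame 'between' two grayscale frames by
    averaging each pixel pair. Real FG uses motion estimation instead of
    blending, but the core idea -- producing a brand-new frame out of two
    rendered ones -- is the same."""
    return [
        [(pa + pb) // 2 for pa, pb in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

# Two tiny 2x2 "frames" with pixel brightness values 0-255
frame_a = [[0, 100], [200, 50]]
frame_b = [[100, 100], [0, 150]]
mid = interpolate_midframe(frame_a, frame_b)
print(mid)  # [[50, 100], [100, 100]]
```

Naive blending like this is exactly why fast-moving objects ghost on cheap TV interpolation; motion-aware approaches exist to avoid it.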
This tech wizardry relies on two core concepts: Optical Flow Analysis and Motion Vectors. Let's go over them one by one:
1) Optical Flow Analysis
This technique takes a series of images and estimates what the next iteration of the image could look like. This estimation is done by analyzing the velocity, color, and brightness of objects in the image. Instead of a clear-cut result, you obtain a mere approximation.
Consider a ball rolling across the floor towards the right, reaching its destination in five discrete steps. If you assume its motion to be linear, you can determine its final destination beforehand. If you've studied analytical geometry, you will have come across the two-point form of a line:
y - y1 = (x-x1)(y2-y1) / (x2-x1)
From just two points, we can determine the equation of the given line. Since the slope is constant, predicting the next position of the image/point is not tricky. Do note that the actual technology is much more complicated than this.
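As a toy illustration of that idea (far simpler than what real optical flow hardware does), the two-point form can be used to extrapolate the next position of an object moving at constant velocity:

```python
def next_position(p1, p2):
    """Given two observed positions of an object moving in a straight
    line at constant speed, extrapolate where it will be one step later
    using the two-point form of a line."""
    (x1, y1), (x2, y2) = p1, p2
    x3 = x2 + (x2 - x1)  # advance x by the same step as before
    # Two-point form: y - y1 = (x - x1) * (y2 - y1) / (x2 - x1)
    y3 = y1 + (x3 - x1) * (y2 - y1) / (x2 - x1)
    return (x3, y3)

# Ball seen at (0, 0), then at (1, 2): predict the next point on the line
print(next_position((0, 0), (1, 2)))  # (2, 4.0)
```

Notice that this is pure extrapolation from pixels on screen: nothing tells the predictor whether the 'object' is a ball or a piece of the HUD, which is precisely the weakness discussed below.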
Most importantly, Optical Flow Analysis does not require access to the game engine’s data. It can be implemented on the driver level, wherein the driver interpolates (inserts) frames by predicting motion on the screen. In simple terms, Optical Flow Analysis does not interact with the game engine.
The disadvantage is that the driver cannot distinguish between the UI and the actual 3D environment, so unforeseen artifacts may occur. Moreover, the output will never be as refined, since the driver only has access to raw frame data.
2) Game Engine Motion Vectors
Motion vectors are directional vectors that indicate the displacement of objects from one frame to another. They are a game engine component and represent objects’ transformation and animation. This is a more robust way of representing motion than simply ‘guessing’ the object’s position.
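To contrast with the pixel-level guessing above, here is a hypothetical sketch of how engine-supplied motion data might be used: with a known per-frame motion vector, the halfway position of an object is computed exactly rather than estimated. The function and parameter names here are purely illustrative, not any real game-engine or driver API.

```python
def predict_with_motion_vector(position, motion_vector, t=0.5):
    """Predict an object's position partway between two frames by
    applying a fraction t of its engine-supplied per-frame motion
    vector. t=0.5 places the object exactly halfway, which is where an
    interpolated frame would draw it -- no guessing required."""
    x, y = position
    dx, dy = motion_vector
    return (x + dx * t, y + dy * t)

# Object at (10, 4) moving (6, -2) per frame: midpoint for the generated frame
print(predict_with_motion_vector((10, 4), (6, -2)))  # (13.0, 3.0)
```

The catch, as the article notes next, is that the driver cannot compute this on its own: the engine has to hand over those vectors.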
Motion vectors are, in fact, a crucial part of Frame Generation. However, the driver and the game engine must share this data, and achieving that correspondence requires implementation work from the game developers.
Therefore, while superior to Optical Flow Analysis, motion vectors are greatly limited in scope and scalability, since not every game exposes them. If you play an old title from the 2010s, motion vector-powered FG solutions (DLSS FG, FSR 3 FG) are very unlikely to be available.
DLSS Frame Generation vs FSR Frame Generation
Now let's see how the current frame generation tech from Team Green and Team Red stacks up.
↪ NVIDIA:
NVIDIA's version of Frame Generation uses a combination of motion vectors and Optical Flow Analysis. The Optical Flow Accelerator in RTX 40 GPUs analyzes two frames and determines the information the game's motion vectors do not include, such as particles, reflections, and shadows. In NVIDIA's motorcycle demo, for instance, Optical Flow Analysis allows for accurate tracking of the bike's shadow.
Motion vectors, on the other hand, track the 3D environment and the scene geometry. In the same example, motion vectors can mathematically pinpoint the future position of the biker or the road, but they cannot determine the shadow, which is where Optical Flow Analysis comes in. The two must therefore work in tandem to produce an accurate image.
- Requires game developer support
- Uses a combination of Optical Flow Analysis and Motion Vectors
- Hardware-locked to RTX 40 GPUs
- Cannot be implemented at the driver level
↪ AMD:
FSR 3 Frame Generation is very similar to NVIDIA's DLSS Frame Generation. Apart from a few naming changes, this technology also incorporates a combination of Optical Flow Analysis and Motion Vectors. For obvious reasons, both technologies require support from game developers.
- Requires game developer support
- Uses a combination of Optical Flow Analysis and Motion Vectors
- Can be enabled on RX 5000-series and newer, as well as RTX 20-series and newer GPUs
- Cannot be implemented at the driver level
The Anomaly: AMD Fluid Motion Frames (AFMF)
The closest thing we have to a driver-level frame generation utility is AMD Fluid Motion Frames. AFMF does not interact with the game in any way whatsoever. It is purely based on Optical Flow Analysis, or a prediction of the next frame.
This allows it to work with almost every DirectX 11/12 title, since no per-game support is required. AFMF does not increase the game's actual rendered framerate; rather, it makes the game appear smoother because you see more frames on your screen.
So, what's the catch? Well, AFMF is not even close to matching DLSS FG or FSR FG in quality. Moreover, users also have to contend with glitches in the UI. However, since it works purely at the driver level, it is a pretty neat option if you are willing to live with the occasional errors.
- Does not require game developer support
- Uses only Optical Flow Analysis
- Only supports RDNA2 or newer GPUs
- Works purely on the driver level
Will NVIDIA Respond to AFMF?
The honest answer is 'we don't know', because NVIDIA has yet to make any statement on such an option. AMD's Aaron Steinman has suggested that AMD's move may push NVIDIA to develop something like AFMF:
“I would be curious to know if Nvidia feels now they have to match what we’ve done in making some of these solutions driver-based”
Aaron Steinman, to PC Gamer
It's quite interesting that we haven't seen NVIDIA's version of AFMF, since NVIDIA once proudly stated that the RTX 40 lineup has a better Optical Flow Accelerator than Ampere (RTX 30). That was the very reason NVIDIA FG (using Optical Flow + Motion Vectors) can only be enabled on Ada Lovelace.
On the other hand, NVIDIA may need to train another model from scratch for this project, which requires significant overhead.
↪ Is it Even Viable?
Frame Generation using both Optical Flow Analysis and Motion Vectors isn't flawless. The input latency these features introduce makes them unappealing to many gamers. On the flip side, the driver-based AFMF isn't any better: the interpolation overhead eats into your base framerate, and it is inconsistent and prone to graphical glitches.
A better question would be: shouldn't NVIDIA improve DLSS Frame Generation first? Given that AI models tend to improve with time, the latency issue should eventually be resolved to a great extent. Unless NVIDIA is strictly bound by its data models, making a driver-level FG solution shouldn't be tricky, but is it worth the hassle?
↪ Would it be Pointless?
Hypothetically, where would you use NVIDIA's variant of AFMF? NVIDIA would probably restrict it to RTX 40 or newer GPUs, which are by themselves powerful enough to run old titles at high framerates. Newer games, more often than not, already feature DLSS 3 FG / FSR 3 FG support.
Mobile GPUs are a different case, but a driver-level approach would be most useful on older architectures such as Pascal, Maxwell, Kepler, and Fermi. Sadly, for technical reasons, it would be almost impossible to extend this feature to those GPUs: they lack Tensor Cores, and CUDA can only do so much. Even AFMF only works on RDNA 2 or newer architectures.
↪ Hopes for RTX 50 ‘Blackwell’
As per rumors, the RTX 50 lineup will debut in Q4 2024. Expect a large number of announcements, which may or may not include a universal Frame Generation solution. It is not like NVIDIA to be left behind in any department, even if such a solution ends up hardware-locked to the RTX 40/50 lineup. This is pure speculation, however; no rumors currently indicate such plans from NVIDIA.
Conclusion
While AI has seen unprecedented growth, you need adequate hardware for AI inference and acceleration. If your GPU's cores aren't fast enough, interpolating frames in real time is simply not possible. You could offload the processing to a server, but that would require a subscription and bring abysmal latency. GPU manufacturers need to strike a balance. What are your thoughts on this matter? Tell us in the comments.
FAQs
What is driver-based frame generation?
In simple terms, driver-based frame generation is a universal solution that inserts generated frames between rendered frames without requiring support from game developers.
Will a driver-based solution from NVIDIA be hardware-locked?
Most likely, yes. Since DLSS 3 FG requires RTX 40-series GPUs due to their new 'Optical Flow Accelerator', expect any official driver-based solution to carry similar requirements.
Will NVIDIA actually release one?
There are no concrete rumors regarding this matter. Only time will tell, since most of us were unaware of Frame Generation before NVIDIA announced it two years ago.