NVIDIA DLSS 3.0 Frame Generation Modded To Work With RTX 2000 & RTX 3000 GPUs
When announcing its Ada Lovelace architecture, NVIDIA emphasized how DLSS 3.0 would be a massive leap over DLSS 2.0, and the performance uplifts shown by reviewers are promising. However, owing to a few 'technical' and 'hardware' requirements, DLSS 3.0 is exclusive to Ada Lovelace. This effectively locks Ampere and Turing users out of the new Frame Generation technology featured in DLSS 3.0.
Modders To The Rescue
When DLSS 3.0 first arrived, many claimed that modders would be able to get the new 'Frame Generation' technology working on pre-Lovelace GPUs. To our surprise, u/JusDax on Reddit shared a post claiming to have used DLSS 3.0 on the Turing-based RTX 2070:
DLSS Frame Generation doesn’t seem to be hardware locked to RTX 40 series. I was able to bypass a software lock by adding a config file to remove the VRAM overhead in Cyberpunk. Doing this causes some instability and frame drops but I’m getting ~80 FPS on an RTX 2070 with the following settings:
2560×1440 res
HDR off
DLSS Balanced
DLSS Frame Generation ON
Ray Tracing Ultra preset
(I get ~35-40 FPS without Frame Generation and DLSS at Quality)
Edit: forgot to add some details
The user mentions that the RTX 2070 delivers roughly 35-40 FPS without Frame Generation using DLSS Quality. With Frame Generation turned on (and DLSS set to Balanced), performance jumps to around 80 FPS, roughly double. Not everything is perfect, though: some instability and frame drops were encountered, as expected.
Hold Your Horses
While this will certainly make some Turing and Ampere users happy, it probably won't last long. NVIDIA may release a driver update within days that patches this workaround.
However, this does suggest that DLSS 3.0's exclusivity to Lovelace is a matter of stability rather than a hard hardware limitation. Essentially, a few driver optimizations by NVIDIA and game developers and voilà, you have breathed a few more years of life into your RTX 2000/3000 GPU.
| SKU | Chip | FP32/CUDA Cores | Max Clock | Cache | Memory Bus | VRAM | Memory Spec | Speed (Gbps) | TDP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RTX 4000 Titan? | AD102-450 | 18432 | 3.0GHz+? | 96MB? | 384-bit | 48GB | GDDR6X | 24 | ~800W |
| RTX 4090 Ti | AD102-350 | 18176 | 3.0GHz? | 96MB | 384-bit | 24GB | GDDR6X | 24 | 475W+ |
| RTX 4090 | AD102-300-A1 | 16384 | 2.52GHz | 96MB | 384-bit | 24GB | GDDR6X | 21 | 450W+ (TGP) / 660W (Max TGP) |
| RTX 4080 Ti | AD102 | 14848? | 2.7GHz? | 80MB? | 320-bit | 20GB | GDDR6X | 23 | 420W |
| RTX 4080 (Variant 1) | AD103-300-A1 | 9728 | 2.505GHz | 64MB | 256-bit | 16GB | GDDR6X | 22.5 | 320W (TGP) / 516W (Max TGP) |
| RTX 4080 (Variant 2) | AD104-400-A1 | 7680 | 2.61GHz | 48MB | 192-bit | 12GB | GDDR6X | 21 | 285W (TGP) / 366W (Max TGP) |
| RTX 4070 Ti | AD104-300? | 7680 | 3.0GHz? | ? | 192-bit | 12GB | GDDR6X | 21 | 300W? |
| RTX 4070 | AD104-275? | 7168 | ? | ? | 160-bit | 10GB | GDDR6X | 21 | 250W |