Nvidia has shared a new set of benchmarks for its much-discussed AI-accelerated technology, which is powered by the company’s new Turing architecture.
Nvidia showcased the performance gains from DLSS-powered 4K rendering in real-time graphics. The data shows a considerable gain over conventional 4K rendering with TAA (Temporal Anti-Aliasing).
Nvidia’s test bench used an Intel Core i9-7900X CPU at 3.3 GHz, 16 GB of Corsair DDR4 memory, Windows 10 64-bit, and Nvidia driver version 416.25. Under these conditions, the company reported substantial performance gains from the combination of the Turing architecture and DLSS.
Thanks to DLSS, the upcoming Nvidia RTX 2070 roughly doubles the performance of the GTX 1070, while the flagship RTX 2080 Ti beats the Titan Xp by 41%.
NVIDIA CEO Jensen Huang explained the thinking behind the approach:
“If we can create a neural network architecture and an AI that can infer and can imagine certain types of pixels, we can run that on our 114 teraflops of Tensor Cores, and as a result increase performance while generating beautiful images. Well, we’ve done so with Turing with computer graphics. As a result, with the combination of our 114 teraflops of Tensor Core performance and 15 teraflops of programmable shader performance, we’re able to generate incredible results.”
On its official blog, Nvidia also shared its Lunar Landing demo running with RTX Real-Time Ray Tracing technology, which lets the company simulate light physics with a fidelity not seen before, at lower cost and with reduced development time.
Deep Learning Super Sampling, or DLSS, lets Turing render some pixels with shaders and infer the rest with AI, effectively reducing the workload and horsepower needed to render the entire scene from scratch.
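The idea can be sketched in a few lines. This is a conceptual illustration only, not Nvidia’s actual DLSS pipeline: `render_shaded` and `ai_upscale` are hypothetical stand-ins for the shader pass and the neural-network inference step.

```python
# Conceptual sketch of the DLSS idea (hypothetical stand-ins, not Nvidia's code):
# shade only a fraction of the final pixels, then fill in the rest via inference.
import numpy as np

def render_shaded(height, width):
    """Stand-in for the shader pass: an RGB frame at a reduced internal resolution."""
    return np.random.rand(height, width, 3)

def ai_upscale(frame, factor=2):
    """Placeholder for the neural-network step. Here we merely repeat pixels;
    DLSS would instead *infer* plausible high-resolution detail."""
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

# Shade a quarter of the 4K pixel count (1080p), then upscale to 2160p.
low_res = render_shaded(1080, 1920)
output = ai_upscale(low_res, factor=2)

shaded_pixels = low_res.shape[0] * low_res.shape[1]
final_pixels = output.shape[0] * output.shape[1]
print(final_pixels // shaded_pixels)  # 4: shaders produced only 1/4 of the final pixels
```

In this toy version the shaders do a quarter of the pixel work and the “AI” supplies the rest, which is the workload reduction the article describes.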