Building a real-time 3D environment has long been considered a hard problem, but Nvidia has cracked it. The chip designer has used artificial intelligence (AI) to create 3D gaming environments from images and videos captured in real life. Nvidia today announced that developers can now render 3D environments with the help of real-world video.
The new AI system unveiled by Nvidia can recognize different elements of a street scene: parked cars, buildings, and sidewalks were all modeled with its help. Objects that appear in real-world video can then be rendered into a 3D design and used in games such as urban racing titles. Rather than designing a city from scratch, developers can use the AI system to produce a rendered version of an urban environment.
Nvidia's researchers used neural networks to create the rendered 3D environment of the real world. The approach could prove cheaper and less time-consuming than current 3D rendering pipelines, in which companies must model every object in the virtual world individually. That is why Nvidia's technology is a major breakthrough: it can create a 3D model from real-world video or photos.
Bryan Catanzaro, Vice President of Nvidia's Applied Deep Learning Research Center, said it is the first time the company has managed to create interactive graphics with the help of a neural network. Catanzaro believes neural networks will change the way graphics are created, allowing developers to build new scenes at low cost.
To produce the rendered 3D graphics, the researchers first collected data from an open-source dataset. The dataset in this case was video footage, which was broken down into frames and segmented into elements such as buildings, trees, and other objects. Catanzaro explained that the structure of the world is built traditionally, with the AI generating the graphics for it.
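The division of labor described above, where a conventional engine supplies the scene's structure and a neural network paints in the graphics, can be sketched in miniature. The example below is a toy illustration, not Nvidia's actual system: the segmentation map is a hand-written grid of class IDs, and the "generator" is a simple per-class color lookup standing in for the neural network that would synthesize realistic texture.

```python
# Toy sketch of segmentation-conditioned rendering (illustration only;
# in Nvidia's pipeline the final step is a neural network, which is
# replaced here by a flat color lookup).

# Hypothetical class IDs for street-scene elements.
SKY, BUILDING, ROAD, CAR, SIDEWALK = range(5)

# A tiny hand-written "segmentation map": each cell holds the class of
# the object at that position, as a real system would extract per frame.
segmentation_map = [
    [SKY,      SKY,      SKY,      SKY],
    [BUILDING, BUILDING, SKY,      BUILDING],
    [SIDEWALK, ROAD,     ROAD,     SIDEWALK],
    [SIDEWALK, ROAD,     CAR,      SIDEWALK],
]

# Stand-in "generator": maps each class ID to an RGB color. In the real
# pipeline this is where a generative network would produce realistic
# imagery conditioned on the labeled regions.
PALETTE = {
    SKY:      (135, 206, 235),
    BUILDING: (128, 128, 128),
    ROAD:     (50, 50, 50),
    CAR:      (200, 30, 30),
    SIDEWALK: (180, 180, 160),
}

def render(seg_map):
    """Turn a segmentation map into an RGB image (nested lists)."""
    return [[PALETTE[cls] for cls in row] for row in seg_map]

image = render(segmentation_map)
print(image[0][0])  # top-left "pixel" is sky-colored: (135, 206, 235)
```

The key idea the sketch preserves is that the renderer never models individual objects by hand: once a frame is labeled, the same generation step fills in every region, which is what makes the approach cheaper than modeling each object in the virtual world separately.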