A new paper published by Facebook researchers ahead of SIGGRAPH 2020 describes machine learning-based Neural Supersampling. It is not unlike NVIDIA's Deep Learning Super Sampling (DLSS) technology, but it does not require any proprietary hardware or software.

At the same time, the results are quite impressive: as the example images show, the quality is comparable to DLSS. "The closest thing to our work is the recently introduced NVIDIA Deep-Learned Supersampling (DLSS) technology, which uses a neural network to improve the quality of low-resolution rendered content in real time," the paper notes.

As the researchers note, their method is easy to integrate into modern game engines: it requires neither special hardware (for example, for eye tracking) nor special software (for example, dedicated DLSS drivers), which makes it applicable to a wider range of existing software platforms, hardware accelerators, and displays.

"We found that for neural supersampling, the additional auxiliary information provided by motion vectors was particularly effective. Motion vectors define geometric correspondences between pixels in successive frames. In other words, each motion vector points to the sub-pixel location where a surface point visible in one frame may have appeared in the previous frame. These values are usually estimated by computer vision methods for photographic images, but such optical flow estimation algorithms are prone to errors. In contrast, the rendering pipeline can generate dense motion vectors directly, thereby providing robust and rich input for neural supersampling applied to rendered content.

Our method builds on these observations and combines the additional auxiliary information with a new spatio-temporal neural network design that aims to maximize image and video quality while delivering real-time performance.

At inference time, our neural network takes as input the rendering attributes (color, depth map, and dense motion vectors per frame) of both the current frame and several previous frames, all rendered at low resolution. The network outputs a high-resolution color image corresponding to the current frame. Training is supervised: a high-resolution image obtained with full-screen anti-aliasing serves as the reference image and is paired with each low-resolution input frame."
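To make the quoted description more concrete, here is a minimal sketch in PyTorch of the two ideas it touches on: backward-warping previous low-resolution frames with rendered motion vectors, and feeding the stacked per-frame attributes into a small convolutional upscaler trained against high-resolution, anti-aliased references. This is not the authors' actual architecture; the function and class names (warp_with_motion_vectors, TinySupersampler), the 4x upscale factor, the layer sizes, the number of input frames, and the L1 loss are all illustrative assumptions.

```python
# Hypothetical sketch only -- layer sizes, scale factor, and loss are
# illustrative assumptions, not the architecture from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

SCALE = 4  # assumed upsampling factor (e.g. 270x480 -> 1080x1920)

def warp_with_motion_vectors(prev, motion):
    """Backward-warp a previous low-res frame onto the current frame.

    prev:   (B, C, H, W) previous frame (color or depth)
    motion: (B, 2, H, W) per-pixel motion vectors in pixels, pointing from
            each current-frame pixel to its location in `prev`.
    """
    b, _, h, w = motion.shape
    # Base sampling grid of pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=prev.device, dtype=prev.dtype),
        torch.arange(w, device=prev.device, dtype=prev.dtype),
        indexing="ij",
    )
    x_src = xs.unsqueeze(0) + motion[:, 0]  # where to sample in prev
    y_src = ys.unsqueeze(0) + motion[:, 1]
    # Normalize to [-1, 1] as required by grid_sample.
    grid = torch.stack(
        (2.0 * x_src / (w - 1) - 1.0, 2.0 * y_src / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(prev, grid, mode="bilinear", align_corners=True)

class TinySupersampler(nn.Module):
    """Toy stand-in for the spatio-temporal network described above."""

    def __init__(self, n_frames=4):
        super().__init__()
        in_ch = n_frames * 4  # RGB + depth per frame, previous frames pre-warped
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3 * SCALE * SCALE, 3, padding=1),
            nn.PixelShuffle(SCALE),  # low-res features -> high-res RGB
        )

    def forward(self, frames):
        # frames: (B, n_frames * 4, H, W), current frame plus previous frames
        # already reprojected with warp_with_motion_vectors().
        return self.body(frames)

# Identity warp sanity check: zero motion returns the frame unchanged.
prev_rgb = torch.rand(1, 3, 270, 480)
zero_motion = torch.zeros(1, 2, 270, 480)
warped = warp_with_motion_vectors(prev_rgb, zero_motion)

# One supervised training step against an anti-aliased high-res reference
# (random tensors here stand in for real rendered data).
model = TinySupersampler()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)

low_res_stack = torch.rand(1, 16, 270, 480)   # 4 frames x (RGB + depth)
reference = torch.rand(1, 3, 1080, 1920)      # anti-aliased ground truth

optim.zero_grad()
pred = model(low_res_stack)
loss = F.l1_loss(pred, reference)             # simple reconstruction loss
loss.backward()
optim.step()
```

The key design point the researchers stress is visible in the warping step: because the motion vectors come straight from the rendering pipeline rather than from error-prone optical flow estimation, the reprojected previous frames give the network reliable temporal information to draw on.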

Facebook also mentions possible applications of Neural Supersampling in augmented and virtual reality on its own Oculus platform. However, there is no reason why such a promising alternative to DLSS could not also be used in regular games.


By Alex

Alex Soojung-Kim Pang is a Silicon Valley-based consultant and writer. His books Rest: Why You Get More Done When You Work Less (Basic Books, 2016) and The Distraction Addiction (Little Brown, 2013) blend history, psychology, and neuroscience to explore the hidden role of leisure and mind-wandering in creative lives.
