NVIDIA Elaborates on New DLSS Technique in Control


NVIDIA recently published a blog post showcasing the methods they used to tailor DLSS for Control:

During our research, we found that certain temporal artifacts can be used to infer details in an image. Imagine, an artifact we’d normally classify as a “bug,” actually being used to fill in lost image details. With this insight, we started working on a new AI research model that used these artifacts to recreate details that would otherwise be lost from the final frame.

This AI research model has made tremendous progress and produces very high image quality. However, we have work to do to optimize the model’s performance before bringing it to a shipping game.

Leveraging this AI research, we developed a new image processing algorithm that approximated our AI research model and fit within our performance budget. This image processing approach to DLSS is integrated into Control, and it delivers up to 75% faster frame rates.
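
NVIDIA does not describe exactly how those temporal artifacts are exploited, but the general idea of reusing information from previous frames is well established in real-time rendering (temporal anti-aliasing, for instance). The sketch below is a generic, hypothetical illustration of temporal accumulation in Python, not NVIDIA's algorithm; the blend factor and the static-camera simplification are assumptions made for readability.

```python
import numpy as np

def accumulate_history(current: np.ndarray,
                       history: np.ndarray,
                       blend: float = 0.1) -> np.ndarray:
    """Blend the previous frame's accumulated result into the current one.

    Generic temporal-accumulation sketch (not NVIDIA's method): sub-pixel
    detail that flickers between frames -- normally dismissed as an
    artifact -- is averaged over time, so it survives in the output.
    """
    # A real renderer would first reproject `history` using motion vectors;
    # here we assume a static camera to keep the example minimal.
    return blend * current + (1.0 - blend) * history

# Usage: feed each new low-resolution frame through the accumulator.
frame_shape = (720, 1280, 3)
history = np.zeros(frame_shape, dtype=np.float32)
for _ in range(8):
    current = np.random.rand(*frame_shape).astype(np.float32)  # stand-in frame
    history = accumulate_history(current, history)
```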

Below is a look at DLSS in Control in action. Both sides are rendered at 720p and output at 1080p:

However, NVIDIA does concede that the image processing approach falls short in handling certain types of motion, demonstrating this with a comparison of native 1080p versus 1080p DLSS in Control. As you can see below, the flames are not as well defined with DLSS as they are at native resolution:

The company also states that they hope to take the technique further through their deep learning research:

Deep learning-based super resolution learns from tens of thousands of beautifully rendered sequences of images, rendered offline in a supercomputer at very low frame rates and 64 samples per pixel. Deep neural networks are then trained to recognize what beautiful images look like. Then these networks reconstruct them from lower-resolution, lower sample count images. The neural networks integrate incomplete information from lower resolution frames to create a smooth, sharp video, without ringing, or temporal artifacts like twinkling and ghosting.

Let’s look at an example of our image processing algorithm vs. our AI research model. The video below shows a cropped Unreal Engine 4 scene of a forest fire with moving flames and embers. Notice how the image processing algorithm blurs the movement of flickering flames and discards most flying embers. In contrast, you’ll notice that our AI research model captures the fine details of these moving objects.

With further optimization, we believe AI will clean up the remaining artifacts in the image processing algorithm while keeping FPS high.
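
DLSS itself is not open source, so the actual networks are unknown. Purely as an illustration of the training recipe NVIDIA describes above (low-resolution inputs paired with high-quality offline renders), here is a minimal PyTorch sketch; the tiny architecture, the L1 loss, and the random stand-in tensors are assumptions for the example, not details from NVIDIA:

```python
import torch
import torch.nn as nn

# Toy upscaling network -- a stand-in for the much larger networks NVIDIA
# describes; the architecture and sizes here are illustrative assumptions.
class TinySuperRes(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a higher-res image
        )

    def forward(self, x):
        return self.body(x)

model = TinySuperRes(scale=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Training pairs: low-resolution inputs and high-quality targets (NVIDIA
# renders its targets offline at 64 samples per pixel; random tensors
# stand in for them here).
low_res = torch.rand(4, 3, 360, 640)
high_res = torch.rand(4, 3, 720, 1280)

optimizer.zero_grad()
loss = loss_fn(model(low_res), high_res)
loss.backward()
optimizer.step()
```

In a real pipeline the targets would be the offline, 64-samples-per-pixel renders the quote mentions, and the network, loss, and data volume would be far more elaborate; the sketch only shows the low-resolution-to-high-resolution training pairing.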

