NVIDIA Elaborates on New DLSS Technique in Control


NVIDIA recently published a blog post showcasing the methods it used to adapt DLSS for Control:

During our research, we found that certain temporal artifacts can be used to infer details in an image. Imagine an artifact we’d normally classify as a “bug” actually being used to fill in lost image details. With this insight, we started working on a new AI research model that used these artifacts to recreate details that would otherwise be lost from the final frame.
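The idea of combining information across frames can be illustrated with a toy sketch (this is a generic temporal-accumulation illustration, not NVIDIA's actual algorithm). Each low-resolution frame samples a different sub-pixel offset of the full-resolution signal, so for a static scene a few jittered frames together cover every high-resolution pixel:

```python
import numpy as np

# Toy temporal accumulation (illustrative only, not NVIDIA's method):
# jittered low-res frames are scattered back onto a high-res grid.

rng = np.random.default_rng(0)
high_res = rng.random((8, 8))  # stand-in "ground truth" 8x8 frame
scale = 2                      # 2x upscaling factor

# Render four jittered low-res frames, each sampling a different
# 2x2 sub-pixel phase of the high-res signal.
frames = {
    (dy, dx): high_res[dy::scale, dx::scale]
    for dy in range(scale) for dx in range(scale)
}

# Accumulate the jittered samples back onto the high-res grid.
reconstructed = np.zeros_like(high_res)
for (dy, dx), low_res in frames.items():
    reconstructed[dy::scale, dx::scale] = low_res

# For a static scene the accumulation recovers the frame exactly;
# real scenes move, which is where reconstruction gets hard.
assert np.allclose(reconstructed, high_res)
```

In a real renderer the camera jitter interacts with motion, which is exactly why naive accumulation produces the ghosting and twinkling artifacts discussed later in the post.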

This AI research model has made tremendous progress and produces very high image quality. However, we have work to do to optimize the model’s performance before bringing it to a shipping game.


Leveraging this AI research, we developed a new image processing algorithm that approximated our AI research model and fit within our performance budget. This image processing approach to DLSS is integrated into Control, and it delivers up to 75% faster frame rates.

Below is a look at DLSS in Control in action. Both sides are rendered at 720p and output at 1080p:

However, NVIDIA concedes that the image processing algorithm falls short when handling certain types of motion, demonstrated with a comparison of native 1080p vs. 1080p DLSS in Control. As you can see below, the flames are not as well defined with DLSS as they are at native resolution:


The company also explains how it hopes to advance the technique through its deep learning research:

Deep learning-based super resolution learns from tens of thousands of beautifully rendered sequences of images, rendered offline in a supercomputer at very low frame rates and 64 samples per pixel. Deep neural networks are then trained to recognize what beautiful images look like. Then these networks reconstruct them from lower-resolution, lower sample count images. The neural networks integrate incomplete information from lower resolution frames to create a smooth, sharp video, without ringing, or temporal artifacts like twinkling and ghosting.
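The 64-samples-per-pixel training targets mentioned above matter because Monte Carlo rendering noise shrinks with the square root of the sample count, so 64 spp frames are roughly 8x less noisy than 1 spp frames. A hedged numerical illustration of that statistical fact (not NVIDIA's pipeline, just the underlying math):

```python
import numpy as np

# Illustration of why high-sample-count offline renders make good
# training targets: averaging N random samples cuts noise by sqrt(N).

rng = np.random.default_rng(42)
trials = 20000

one_spp = rng.random(trials)                      # 1 sample per "pixel"
many_spp = rng.random((trials, 64)).mean(axis=1)  # 64 samples averaged

err_1 = one_spp.std()    # noise at 1 spp
err_64 = many_spp.std()  # noise at 64 spp
print(err_1 / err_64)    # close to sqrt(64) = 8
```

This is why the training sequences are rendered offline on a supercomputer at very low frame rates: the network needs clean, converged images to learn what "beautiful" output looks like.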


Let’s look at an example of our image processing algorithm vs. our AI research model. The video below shows a cropped Unreal Engine 4 scene of a forest fire with moving flames and embers. Notice how the image processing algorithm blurs the movement of flickering flames and discards most flying embers. In contrast, you’ll notice that our AI research model captures the fine details of these moving objects.

With further optimization, we believe AI will clean up the remaining artifacts in the image processing algorithm while keeping FPS high.
