
    NVIDIA Elaborates on New DLSS Technique in Control


    NVIDIA recently published a blog post detailing the methods they used to adapt DLSS for Control:

    During our research, we found that certain temporal artifacts can be used to infer details in an image. Imagine an artifact we’d normally classify as a “bug” actually being used to fill in lost image details. With this insight, we started working on a new AI research model that used these artifacts to recreate details that would otherwise be lost from the final frame.
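    The core idea, reusing information smeared across frames to recover detail a single frame misses, can be illustrated with a toy sketch. This is our own simplified illustration of temporal accumulation over jittered samples, not NVIDIA’s actual algorithm:

    ```python
    import random

    def render_sample(x, jitter):
        """Toy 'renderer': point-samples a high-frequency pattern at a
        jittered subpixel offset, standing in for one low-res frame."""
        return 1.0 if int((x + jitter) * 8) % 2 == 0 else 0.0

    def temporal_accumulate(x, frames, alpha=0.1):
        """Exponentially blend jittered samples across frames. Over time
        the history converges toward the true area average of the signal,
        detail that any single centered sample would alias away."""
        random.seed(0)
        history = render_sample(x, random.uniform(-0.5, 0.5))
        for _ in range(frames - 1):
            sample = render_sample(x, random.uniform(-0.5, 0.5))
            history = (1 - alpha) * history + alpha * sample
        return history

    # One frame sees only 0 or 1; many jittered frames recover a value
    # near 0.5, the sub-pixel detail a single sample throws away.
    one_frame = temporal_accumulate(0.25, 1)
    many_frames = temporal_accumulate(0.25, 200)
    ```

    The jitter here plays the role of the “temporal artifacts” in the quote: what looks like shimmer frame-to-frame is, in aggregate, extra information about the underlying image.
    
    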


    This AI research model has made tremendous progress and produces very high image quality. However, we have work to do to optimize the model’s performance before bringing it to a shipping game.

    Leveraging this AI research, we developed a new image processing algorithm that approximated our AI research model and fit within our performance budget. This image processing approach to DLSS is integrated into Control, and it delivers up to 75% faster frame rates.

    Below is a look at DLSS in Control in action. Both sides are rendered at 720p and output at 1080p:
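    The “up to 75% faster” figure is plausible given how much shading work the lower render resolution saves. A quick back-of-the-envelope calculation (our own, not from NVIDIA’s post):

    ```python
    # Pixels shaded when rendering at 720p vs. native 1080p.
    native = 1920 * 1080   # 2,073,600 pixels
    render = 1280 * 720    #   921,600 pixels

    fraction_shaded = render / native   # ~0.444: 720p shades under half
    shading_saved = 1 - fraction_shaded # ~55.6% fewer pixels shaded

    print(f"720p shades {fraction_shaded:.1%} of a 1080p frame")
    ```

    If shading were the only cost, rendering 44% of the pixels could more than double frame rates; the observed “up to 75%” presumably reflects the upscaling pass and other fixed per-frame costs, though that interpretation is ours.
    
    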

    However, NVIDIA concedes that the image processing approach falls short when handling certain types of motion, illustrating this with a comparison of native 1080p vs. 1080p DLSS in Control. As you can see below, the flames are not as well defined with DLSS as they are at native resolution:

    The company also states that it hopes to advance its AI research through deep learning:

    Deep learning-based super resolution learns from tens of thousands of beautifully rendered sequences of images, rendered offline in a supercomputer at very low frame rates and 64 samples per pixel. Deep neural networks are then trained to recognize what beautiful images look like. Then these networks reconstruct them from lower-resolution, lower sample count images. The neural networks integrate incomplete information from lower resolution frames to create a smooth, sharp video, without ringing, or temporal artifacts like twinkling and ghosting.
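    The “64 samples per pixel” reference images are, in essence, heavily averaged Monte Carlo renders. A toy illustration (our own, pure Python) of why more samples per pixel yield a cleaner training target:

    ```python
    import random

    def noisy_sample(true_value, noise=0.5):
        """One jittered render sample of a pixel: a toy Monte Carlo
        estimate of the pixel's true color."""
        return true_value + random.uniform(-noise, noise)

    def render_pixel(true_value, spp):
        """Average spp samples, as an offline renderer does per pixel."""
        return sum(noisy_sample(true_value) for _ in range(spp)) / spp

    random.seed(0)
    true_value = 0.7
    one_spp = render_pixel(true_value, 1)          # noisy, like a real-time frame
    sixty_four_spp = render_pixel(true_value, 64)  # clean, like a training target

    err_1 = abs(one_spp - true_value)
    err_64 = abs(sixty_four_spp - true_value)
    ```

    Averaging 64 samples shrinks the noise by roughly a factor of 8 (error falls as one over the square root of the sample count), which is why such renders can serve as the “beautiful images” the networks learn to reproduce from cheap low-sample input.
    
    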

    Let’s look at an example of our image processing algorithm vs. our AI research model. The video below shows a cropped Unreal Engine 4 scene of a forest fire with moving flames and embers. Notice how the image processing algorithm blurs the movement of flickering flames and discards most flying embers. In contrast, you’ll notice that our AI research model captures the fine details of these moving objects.

    With further optimization, we believe AI will clean up the remaining artifacts in the image processing algorithm while keeping FPS high.

