NVIDIA’s new Turing-based RTX GPUs are here, and aside from their cutting-edge ray-tracing capability, they support a host of other technologies. From AI-assisted rendering to a reconfigured graphics pipeline, the Turing architecture changes how graphics cards work at a fundamental level. While some of these improvements are subtler than others, they should still have a significant impact on image quality and/or performance. In this post, we’ll cover how the Turing-based RTX cards affect VR gaming.
NVIDIA VRWorks Audio: Using Ray-tracing to Accelerate Acoustics
When ray-tracing is brought up, most of us think of it as a visual technology, and in most cases it is. Ray-tracing improves image quality considerably by tracing rays to simulate how light travels through a scene, which in turn notably improves shadows, ambient occlusion, reflections, and other lighting-dependent effects. But that is not all it can be used for.
NVIDIA is also employing ray-tracing to enhance the acoustics of VR games and applications: Turing’s RT Cores can trace the paths sound takes through a scene, strengthening the 3D audio effect in VR scenarios. Dubbed VRWorks Audio, this workload runs up to 6x faster on the RTX cards than on the older Pascal architecture, which should enhance VR immersion appreciably. You can watch the SIGGRAPH demo of VRWorks Audio in the video below:
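To see why ray-tracing maps naturally onto acoustics, here is a minimal sketch of the classic image-source method for one sound reflection. Everything here is illustrative (the positions, the single wall, the simple 1/d falloff) and is not the VRWorks Audio implementation; only the speed-of-sound figure is physical.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def image_source(source, wall_x):
    """Mirror the source across a wall at x = wall_x (image-source method)."""
    sx, sy, sz = source
    return (2 * wall_x - sx, sy, sz)

def path(a, b):
    """Distance, arrival delay, and a simple 1/d amplitude falloff."""
    d = math.dist(a, b)
    return d, d / SPEED_OF_SOUND, 1.0 / d

source   = (1.0, 1.7, 0.0)   # hypothetical emitter position (metres)
listener = (4.0, 1.7, 0.0)   # hypothetical listener position
wall_x   = 6.0               # reflective wall plane at x = 6

d_dir, t_dir, a_dir = path(source, listener)
d_ref, t_ref, a_ref = path(image_source(source, wall_x), listener)

print(f"direct:    {d_dir:.1f} m, {t_dir * 1000:.2f} ms")   # 3.0 m, 8.75 ms
print(f"reflected: {d_ref:.1f} m, {t_ref * 1000:.2f} ms")   # 7.0 m, 20.41 ms
```

A real acoustic simulation traces thousands of such paths, including scattering and material absorption, which is exactly the kind of incoherent ray workload RT Cores are built to accelerate.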
Better Eye-tracking and Animations
In addition to the RT Cores, Turing also integrates NVIDIA’s Tensor Cores, which are the basis of the company’s success in AI and neural networks. NVIDIA is leveraging them for a new AI-driven super-sampling algorithm that should increase image fidelity by a considerable margin.
In VR gaming, these Tensor Cores should allow for better position and eye tracking, as well as more fluid and realistic in-game animations.
Variable Rate Shading
This is something that has been discussed many times in the industry, by developers and hardware manufacturers alike. Until now, however, there was no practical way to implement it in consumer hardware.
With Turing, NVIDIA is promising something called Variable Rate Shading (VRS). VRS lets the GPU spend more shading work on the objects or regions of a scene the player is actually focusing on, while reducing the shading rate for the background. This improves perceived image quality and performance at the same time.
Consider a scene where you’re shooting at a horde of zombies. For the most part, you’ll be focusing on the undead rather than on the background environment. Here, VRS will reduce the shading quality of the environment and spend more effort on the zombies and their immediate surroundings.
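The zombie example above boils down to a per-region rate lookup. The toy function below picks a coarser shading rate for screen tiles farther from the player’s point of focus; the thresholds and the (width, height) rate pairs are made-up illustrative values, not NVIDIA’s API, though 1x1/2x2/4x4 are typical coarse-shading granularities.

```python
# Toy foveated-shading lookup: VRS lets the GPU shade different screen
# regions at different rates, e.g. once per 2x2 or 4x4 pixel block for
# tiles far from where the player is looking. Thresholds are invented.
def shading_rate(tile_center, gaze, radii=(0.15, 0.35)):
    """Pick a (w, h) shading rate for a tile, in normalized screen coords."""
    dist = ((tile_center[0] - gaze[0]) ** 2 +
            (tile_center[1] - gaze[1]) ** 2) ** 0.5
    if dist < radii[0]:
        return (1, 1)  # full rate where the player is looking
    if dist < radii[1]:
        return (2, 2)  # one shade per 2x2 block of pixels
    return (4, 4)      # coarsest rate for the periphery

gaze = (0.5, 0.5)                      # e.g. aiming at the zombie horde
print(shading_rate((0.5, 0.5), gaze))  # → (1, 1): full detail on target
print(shading_rate((0.9, 0.9), gaze))  # → (4, 4): coarse background
```

On real hardware the per-region rates would be fed to the GPU (for example as a screen-space rate image) rather than evaluated per tile on the CPU, but the selection logic is the same idea.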
Multi-view Rendering
Multi-view Rendering (MVR) essentially expands the usable field of view (FOV) in VR games, much like ultra-wide monitors do on the desktop. Traditionally, VR rendering has used Single Pass Stereo, which saves resources by rendering both eye views in a single pass from two slightly offset versions of the same projection.
With MVR, the GPU can render up to four projections in a single pass (each projection is one rectangular view; MVR combines the four into a wider, panoramic one). This should vastly improve immersion and the FOV in VR gaming, which has so far been quite limited.
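To make the four-projection idea concrete, here is a small sketch of how four yawed frusta can tile a wide horizontal FOV. The yaw angles and per-projection FOV are invented illustrative numbers, not the parameters of any particular headset or of NVIDIA’s implementation.

```python
import math

# Hypothetical wide-FOV setup: four projections, each rotated (yawed)
# around the vertical axis so canted outer displays get their own
# correct view instead of a stretched copy of the central projection.
def yawed_forward(yaw_deg):
    """Forward vector of a projection yawed around the vertical (y) axis."""
    y = math.radians(yaw_deg)
    return (math.sin(y), 0.0, -math.cos(y))

PROJECTION_YAWS = [-45.0, -15.0, 15.0, 45.0]  # degrees; illustrative
HALF_FOV = 30.0                               # per-projection half FOV

views = [yawed_forward(yaw) for yaw in PROJECTION_YAWS]

# Total horizontal coverage spanned by the four frusta:
total_fov = (max(PROJECTION_YAWS) - min(PROJECTION_YAWS)) + 2 * HALF_FOV
print(f"{len(views)} projections covering {total_fov:.0f}° horizontally")
# → 4 projections covering 150° horizontally
```

The point of doing this in hardware is that the scene’s geometry is submitted once and replicated across all four projections, instead of paying the full vertex cost four times.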
And Then There Was VirtualLink
If you’ve ever set up a VR headset like the Oculus Rift or the HTC Vive, you know it can be a major pain: there are multiple cables, one for display, one for power, and another for data. VirtualLink solves this problem by combining all of these into a single cable while still providing ample bandwidth. It also lets smaller form-factor devices like notebooks and ultrabooks drive VR headsets, provided they support the standard.
NVIDIA’s Turing-based RTX GPUs ship with a dedicated VirtualLink USB-C port and are the first to fully support it.
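As a back-of-the-envelope check of what that single cable replaces, the sketch below uses the commonly cited VirtualLink figures (four DisplayPort HBR3 lanes, a USB 3.1 Gen 2 data channel, and up to 27 W of power). Treat those numbers as assumptions from public reporting on the standard, not guaranteed specs.

```python
# What one VirtualLink USB-C cable is said to carry (assumed figures):
HBR3_LANE_GBPS = 8.1     # raw rate per DisplayPort HBR3 lane
DP_LANES = 4             # four lanes dedicated to the display stream
USB_GBPS = 10.0          # USB 3.1 Gen 2 channel for tracking/camera data
POWER_W = 27             # power delivery budget for the headset

display_gbps = HBR3_LANE_GBPS * DP_LANES
print(f"display: {display_gbps:.1f} Gbit/s over {DP_LANES} lanes")
print(f"data: {USB_GBPS:.1f} Gbit/s, power: up to {POWER_W} W")
```

That display figure alone matches a full four-lane DisplayPort connection, which is why one VirtualLink cable can stand in for the separate video, data, and power leads of current headsets.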
If you are a developer, or simply an enthusiast who wants to learn more about NVIDIA VRWorks, the SDK will be available this September when the new GPUs start shipping.