NVIDIA RTX: All You Need To Know About Turing Ray-Tracing

    NVIDIA officially announced its next-gen GeForce RTX Turing GPUs today, right before Gamescom. After a whole week of teasing from both NVIDIA and its partners, the RTX 2070, the RTX 2080, and the $1199 RTX 2080 Ti were at long last shown off at NVIDIA’s pre-Gamescom event.

    These new RTX GPUs are supposed to change the way games are rendered, adopting a hybrid pipeline that combines ray-tracing and rasterization. NVIDIA CEO Jensen Huang has called Turing “NVIDIA’s most important GPU architecture since 2006’s Tesla GPU architecture and the introduction of CUDA”. However, while NVIDIA has been talking endlessly about all of Turing’s features, there’s been no word on real-world performance numbers.

    This doesn’t bode well for gamers and enthusiasts. Pre-orders for the GeForce Turing GPUs are already open, but reviews probably won’t arrive before the second week of September. Given all the demand and the lack of competition from AMD, I doubt stocks will last that long.

    Another glaring aspect of these new RTX graphics cards, hidden behind all the fancy tech, is the price tag. The RTX 2080 Ti is priced at $1199, nearly twice the cost of the GTX 1080 Ti. The RTX 2080, on the other hand, is priced close to the old flagship at $799, and I seriously doubt the performance delta between the two will be enough to warrant the extra dollars. Bottom line: GPU prices seem set to soar in the coming months.

    NVIDIA Turing: Hybrid Rendering

    Let’s talk about the elephant in the room: NVIDIA RTX. With Turing, NVIDIA is looking to shift the core mechanics of game rendering to a hybrid model, a combination of ray-tracing and traditional rasterization. To aid in this new endeavor, NVIDIA has included dedicated hardware called RT Cores in Turing.

    These RT Cores are built to accelerate ray-tracing operations, so much so that NVIDIA claims these new GPUs can cast 10 billion (Giga) rays per second, which it says is 25x more than Pascal.
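
    To put that “Gigarays per second” figure in perspective, here’s a minimal CPU-side sketch of what a single ray cast boils down to: testing one ray against one piece of geometry (a sphere, for simplicity; the RT Cores accelerate ray/triangle tests against acceleration structures in hardware). Every name below is illustrative, not NVIDIA’s API.

    ```cpp
    // One "ray cast" in miniature: intersect a ray with a sphere. Doing
    // billions of tests like this per second (against triangles and BVHs,
    // not lone spheres) is the job the RT Cores take over.
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

    // Distance along the ray to the nearest hit, or -1 on a miss.
    // Assumes dir is normalized.
    float raySphere(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
        Vec3 oc = sub(origin, center);
        float b = 2.0f * dot(oc, dir);
        float c = dot(oc, oc) - radius * radius;
        float disc = b * b - 4.0f * c;
        if (disc < 0.0f) return -1.0f;         // ray misses the sphere
        return (-b - std::sqrt(disc)) / 2.0f;  // nearest intersection
    }

    int main() {
        float t = raySphere({0, 0, 0}, {0, 0, 1}, {0, 0, 5}, 1.0f);
        std::printf("hit at t = %.2f\n", t);   // expect t = 4.00
    }
    ```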

    In addition to the RT Cores, Turing also retains the Tensor Cores originally seen in the professional-grade Volta GPUs. The job of these Tensor Cores (which excel at AI-related approximations) is to reduce the workload on the RT Cores by minimizing the number of rays cast, leveraging AI denoising to clean up the result. This makes the hybrid rendering model practical and allows the GPU to achieve encouraging performance numbers.
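
    To make that trade-off concrete, the sketch below stands in for the real denoiser, which is a trained neural network running on the Tensor Cores, with a plain 3x3 averaging filter over a noisy one-sample-per-pixel image. It’s a conceptual toy under that stated simplification, not NVIDIA’s actual denoising pipeline.

    ```cpp
    // Cast few rays, then filter the noise out, instead of brute-forcing
    // the image with more rays. A box filter is a crude stand-in for the
    // learned denoiser that actually runs on the Tensor Cores.
    #include <cstdio>
    #include <vector>

    // Smooth a noisy image with a 3x3 box filter.
    std::vector<float> denoise(const std::vector<float>& img, int w, int h) {
        std::vector<float> out(img.size());
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                float sum = 0.0f;
                int n = 0;
                for (int dy = -1; dy <= 1; ++dy) {
                    for (int dx = -1; dx <= 1; ++dx) {
                        int nx = x + dx, ny = y + dy;
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                        sum += img[ny * w + nx];
                        ++n;
                    }
                }
                out[y * w + x] = sum / n;  // average of the valid neighbours
            }
        }
        return out;
    }

    int main() {
        std::vector<float> noisy(16 * 16, 1.0f);
        noisy[8 * 16 + 8] = 9.0f;  // one bright "firefly" of sampling noise
        auto clean = denoise(noisy, 16, 16);
        std::printf("%.2f -> %.2f\n", noisy[8 * 16 + 8], clean[8 * 16 + 8]);
        // the outlier is averaged down toward its neighbours: 9.00 -> 1.89
    }
    ```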

    Once again, however, NVIDIA isn’t providing any concrete performance figures or frame rates for the Turing GPUs. The official figures say they are 6x faster than Pascal, but those older Pascal GPUs were so painfully slow at ray-tracing that it just wasn’t practical. That raises the question: how fast and efficient will these Turing GPUs be at real-world ray-tracing operations, and how well will they perform in 3D applications like games?

    GDDR6 and Turing Streaming Multiprocessor

    Ray-tracing may be the main highlight of the new RTX GPUs, but it is not the only new addition. These graphics cards will boast the latest GDDR6 memory. NVIDIA’s Turing GPUs will use Samsung’s 14 Gbps chips, which, although not a massive step up from GDDR5X, still help close the gap between GDDR and HBM.

    The main advantage, performance-wise, is that the memory is now internally divided into two channels per chip. For a standard 32-bit-wide chip, this means a pair of 16-bit memory channels, for a total of 16 such channels on a 256-bit card. As we’ve already highlighted in this piece, GPUs are adept at parallel processing and should excel with this arrangement. Like DDR4, GDDR6 also sees a reduced operating voltage (1.35 V), although not quite as low.

    You can expect GDDR6 to run at around 16 Gbps out of the box as the memory matures, and even higher once overclocked.
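
    For a feel for what those per-pin speeds translate to, here’s the standard bandwidth arithmetic, assuming a 256-bit bus: per-pin rate times bus width, divided by eight bits per byte.

    ```cpp
    // Back-of-the-envelope GDDR6 bandwidth on an assumed 256-bit bus.
    #include <cstdio>

    double bandwidthGBs(double gbpsPerPin, int busWidthBits) {
        return gbpsPerPin * busWidthBits / 8.0;  // bits/s -> bytes/s
    }

    int main() {
        std::printf("14 Gbps x 256-bit: %.0f GB/s\n", bandwidthGBs(14, 256));  // 448 GB/s
        std::printf("16 Gbps x 256-bit: %.0f GB/s\n", bandwidthGBs(16, 256));  // 512 GB/s
    }
    ```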

    VirtualLink and NVLink SLI

    A new generation of NVIDIA GPUs, and yet again another GPU gets stripped of SLI. This time it’s the RTX 2070 that won’t support SLI; only the top-end 2080 and 2080 Ti will receive multi-GPU support. The big Turing GPUs that do get SLI will drive it over NVLink, NVIDIA’s proprietary cache-coherent GPU interconnect.

    The RTX 2080 and 2080 Ti will leverage the dual channels of the NVLink connector to run SLI, for a combined bidirectional bandwidth of 50 GB/s (25 GB/s in each direction). While this may seem like a major step up from the last-generation HB SLI bridge used on the 10-series cards, will it save the dying multi-GPU race?
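
    For a rough sense of scale, the arithmetic below estimates what shuttling finished 4K frames between GPUs at 60 fps would cost, which turns out to be a small fraction of that 50 GB/s. The numbers are illustrative only; real SLI traffic involves far more than final frames.

    ```cpp
    // Illustrative only: the cost of moving finished 4K frames over the
    // link in an AFR setup, versus 50 GB/s of combined NVLink bandwidth.
    #include <cstdio>

    int main() {
        const double bytesPerPixel = 4.0;                  // 32-bit RGBA
        const double frameBytes = 3840.0 * 2160.0 * bytesPerPixel;
        const double frameGB = frameBytes / 1e9;           // ~0.033 GB per frame
        const double trafficGBs = frameGB * 60.0;          // at 60 fps
        std::printf("4K frame: %.3f GB, 60 fps traffic: %.2f GB/s\n",
                    frameGB, trafficGBs);                  // ~0.033 GB, ~1.99 GB/s
    }
    ```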

    The short answer to that is no. AFR (alternate frame rendering) is still the predominant rendering technique in SLI, and it doesn’t play well with a whole bunch of modern shading effects, most notably temporal anti-aliasing and upscaling algorithms like checkerboard rendering.
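
    The sketch below shows the core of the conflict: under AFR, frame N renders on one GPU, but the history buffer that temporal anti-aliasing reads was written by the other GPU on the previous frame, forcing a cross-GPU copy (or a broken effect) every single frame. The scheduling here is schematic, not how any particular driver implements it.

    ```cpp
    // Schematic AFR scheduling: every frame's TAA history lives on the
    // other GPU, so every frame needs a copy across the link.
    #include <cstdio>

    int main() {
        for (int frame = 1; frame <= 4; ++frame) {
            int gpu     = frame % 2;        // AFR alternates GPUs per frame
            int histGpu = (frame - 1) % 2;  // where frame N-1's history lives
            std::printf("frame %d on GPU %d, history on GPU %d%s\n",
                        frame, gpu, histGpu,
                        gpu != histGpu ? " -> cross-GPU copy" : "");
        }
    }
    ```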

    On top of that, very few users can actually afford it. That won’t inspire many developers to implement it, especially given that doing so is no easy task and requires more than just a handful of tweaks.

    What you can expect support for, however, is VirtualLink. As VR grows in popularity, VirtualLink support will too. The USB Type-C alternate mode was announced last month, and supports 15W+ of power, 10 Gbps of USB 3.1 Gen 2 data, and four lanes of DisplayPort HBR3 video, all over a single cable. Basically, it’s a DisplayPort 1.4 connection with extra data and power, meant to drive a VR headset. The standard is backed by NVIDIA, AMD, Oculus, Valve, and Microsoft, and the GeForce RTX cards appear to be the first to officially support it.
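
    Tallying up what that single cable carries (DisplayPort HBR3 runs at 8.1 Gbps per lane; the other figures are the ones quoted above):

    ```cpp
    // What one VirtualLink cable carries, summed up.
    #include <cstdio>

    int main() {
        const double videoGbps = 4 * 8.1;  // 4 lanes of DisplayPort HBR3
        const double usbGbps   = 10.0;     // USB 3.1 Gen 2 data channel
        const double powerW    = 15.0;     // minimum power budget
        std::printf("video: %.1f Gbps, data: %.0f Gbps, power: %.0f W+\n",
                    videoGbps, usbGbps, powerW);  // 32.4 Gbps, 10 Gbps, 15 W+
    }
    ```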

    Conclusion

    NVIDIA’s Turing reveal comes in the wake of absolutely zero competition from AMD, and at a time when thousands of gamers are starved for an upgrade in the aftermath of the cryptocurrency boom. With the flagship RTX 2080 Ti priced at $1199 for the Founders Edition and no performance figures, official or otherwise, a lot is still unknown about NVIDIA’s new graphics cards. Our recommendation would be to wait out the first batch of Turing GPUs and only splurge on an upgrade after the reviews are in, when a more informed decision is possible.

    Further reading:

    http://wordpress-695532-2297746.cloudwaysapps.com/directx-raytracing-gdc-2018/
    http://wordpress-695532-2297746.cloudwaysapps.com/nvidia-rtx-gdc/
