NVIDIA has been touting its new GeForce RTX cards with the “Graphics Reinvented” tagline, putting raytracing at the center of an aggressive marketing campaign. Turing may be the first architecture to support raytracing at practical frame rates, but at what cost? That is precisely what we’ll discuss in this post.
On paper, NVIDIA’s claims are quite convincing: RT Cores along with Tensor Cores bring raytracing to fruition. But there’s a catch. There’s always a catch, and it became pretty clear when Battlefield V got its raytracing patch.
As soon as you turn on raytracing, performance takes a rather painful hit, with the FPS slashed to less than half, and that’s with the low and medium presets. At high and ultra, things get real ugly, with even the mighty RTX 2080 Ti unable to sustain a steady 60 FPS, and that’s at 1080p. Raytracing at QHD or UHD is out of the question for at least another three to five years.
The GeForce RTX 2080 barely manages to stay above the 60 FPS mark at the lower settings and is reduced to a meager 40 FPS at the higher presets. The RTX 2070 is another casualty: it fails to hit 60 FPS even at the lowest preset at 1080p, and the higher settings bring it to its knees, where it can’t even manage the 40 FPS its older sibling yields.
From what we’ve been hearing, the mid-range Turing cards might also sport RT Cores and/or Tensor Cores, but that would be a very impractical decision given the RTX 2070’s poor performance with raytracing enabled. In fact, from what we’ve already seen, I’d go as far as to say that even the RTX 2070 shouldn’t have had raytracing capability, as it barely manages playable frame rates in EA’s latest title. Furthermore, Battlefield V and the Frostbite engine are highly optimized, so don’t expect better performance from future RTX-enabled games either.
Another important point in this whole Turing/RTX discussion is the price. I do appreciate NVIDIA bringing real-time raytracing to its hardware, I really do, and I won’t, like many other so-called tech pundits, call it useless. It’s a first step, and first steps often tend to be rough. However, the price tags don’t really justify it, especially the RTX 2070’s. It is supposed to be a budget card, but that $499+ tag puts it in an awkward spot where it can neither run raytracing properly nor reach the average gamer’s system.
To top it all off, I doubt NVIDIA was unaware of this; the decision to include RTX in the 2070 was probably made from a marketing point of view, even though leaving it out would have made more sense. Hell, I’d even go as far as to say that there should have been RTX and non-RTX versions of the cards, but that’s not practical, and designing so many different SKUs would not have been profitable. With that said, I really do hope the 2060 and other “budget” Turing cards omit the RT Cores and launch at more affordable prices.