According to the latest rumor from 4chan, AMD’s much-awaited Navi 10 GPU will cost around $259 and will offer performance somewhere between NVIDIA’s GeForce GTX 1080 and the Radeon Vega 56. The anonymous poster goes on to detail the full specifications of the supposed Navi 10 GPU, including the TDP, core clocks, bus width, cache sizes, and memory bandwidth.

AMD Radeon

According to this “AMD employee”, Radeon Navi will bring the Draw Stream Binning Rasterizer (DSBR) back into the spotlight, this time marketed as a “Next Gen” rendering technique. DSBR was first introduced with the Vega architecture, and the official white paper reads:

Vega uses a relatively small number of tiles, and it operates on primitive batches of limited size compared with those used in previous tile-based rendering architectures. This setup keeps the costs associated with clipping and sorting manageable for complex scenes while delivering most of the performance and efficiency benefits.


This means that unlike fully tile-based rendering (TBR) architectures, the Navi parts are going to use a hybrid form of rasterization that sits somewhere between immediate-mode rendering and TBR.


The fact that the official documentation says Vega uses a “relatively small number of tiles” and operates on “primitive batches of limited size” means that DSBR isn’t used throughout and doesn’t break the entire screen space into small tiles. Instead, it operates on certain parts of the screen (limiting the tile sets) and on certain batches of primitives. This reduces rendering complexity at the silicon level: the tiled portions of the scene don’t need a large on-chip buffer whose working set would compete with other resources for L2 cache.
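The “limited tiles, limited batches” idea can be illustrated with a toy binning pass. Everything below (the tile size, the batch size, the bounding-box overlap test) is an illustrative sketch of the general technique, not AMD’s actual DSBR logic:

```python
# Toy sketch of binned rasterization with small primitive batches.
# Tile and batch sizes are arbitrary choices for illustration.
from collections import defaultdict

TILE = 32   # tile edge in pixels (assumption for this sketch)
BATCH = 4   # primitives binned per batch, kept deliberately small

def bin_batch(triangles, screen_w, screen_h):
    """Map each triangle's bounding box to the screen tiles it overlaps."""
    bins = defaultdict(list)
    for tri_id, tri in enumerate(triangles):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        x0, x1 = max(0, min(xs)) // TILE, min(screen_w - 1, max(xs)) // TILE
        y0, y1 = max(0, min(ys)) // TILE, min(screen_h - 1, max(ys)) // TILE
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                bins[(tx, ty)].append(tri_id)
    return bins

def rasterize(triangles, screen_w=128, screen_h=128):
    # Process primitives a small batch at a time: bin, then shade tile by
    # tile, so only one batch's worth of per-tile state is live at once.
    for start in range(0, len(triangles), BATCH):
        bins = bin_batch(triangles[start:start + BATCH], screen_w, screen_h)
        for tile, ids in sorted(bins.items()):
            print(f"tile {tile}: triangles {[start + i for i in ids]}")

tris = [[(5, 5), (40, 10), (20, 60)], [(100, 100), (120, 110), (110, 127)]]
rasterize(tris)
```

Because each batch is bound before shading, the per-tile state stays small — the same trade-off the white paper describes between full TBR and immediate-mode rendering.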


It’s not certain whether the Navi cards will use the same form of DSBR as Vega, but considering that it’s based on the same GCN architecture, it is highly likely.


The next detail shared by our naughty employee relates to the graphics pipeline. Remember NGG (the Next Gen Geometry pipeline) and primitive shaders? They were among the highlights of the Vega architecture but never quite materialized because, well, the RX Vega GPUs were a bust. It seems that instead of getting developers to adopt these technologies, AMD is having console designers bake them into the next-gen consoles, making it easier for PC ports to incorporate them.

The Navi 10 GPU is also suspected to support NGG and primitive shaders. These improve GPU efficiency by combining the vertex and primitive phases of rendering and allowing early culling of unneeded geometry.
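As a rough illustration of what early culling buys, here is a minimal sketch that rejects back-facing and zero-area triangles with a 2D signed-area test before any further per-vertex work is done. The function names and the counter-clockwise winding convention are assumptions for the example, not AMD’s pipeline:

```python
# Hedged sketch of early primitive culling: discard triangles that can
# never contribute pixels before spending more pipeline work on them.

def signed_area(a, b, c):
    """Twice the signed area of triangle abc in screen space."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def early_cull(triangles):
    """Keep only front-facing, non-degenerate triangles (CCW winding assumed)."""
    return [t for t in triangles if signed_area(*t) > 0]

tris = [
    [(0, 0), (10, 0), (0, 10)],   # CCW -> kept
    [(0, 0), (0, 10), (10, 0)],   # CW (back-facing) -> culled
    [(0, 0), (5, 5), (10, 10)],   # degenerate (zero area) -> culled
]
print(len(early_cull(tris)))  # -> 1
```

In hardware the same test happens per primitive batch, which is why merging the vertex and primitive stages makes the culling cheaper.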

The L1 and L2 caches have apparently been enlarged as well, with the former bumped up to 32KB and the latter to 3072KB. The Navi GPU will have a 256-bit wide bus and a corresponding bandwidth of 410GB/s, which implies the card will use GDDR6 memory, just like NVIDIA’s Turing architecture. The core clock is also said to boost past 1.8GHz, just north of the custom Navi chip for the PS5.


Lastly, the TDP and a rough estimate of the performance have also been mentioned. The whistle-blower states that the Navi GPU will be a 150W part with performance somewhere between the Radeon RX Vega 56 and the GeForce GTX 1080. This means that it’ll be competing against the RTX 2060 or perhaps the GTX 1660 Ti. The former features RT and Tensor cores for ray tracing and DLSS, so it’ll be interesting to see whether Navi has native hardware-level support for ray tracing, as the PS5’s lead architect, Mark Cerny, claimed yesterday.

If this info proves to be legit, then AMD’s Navi will be a potent graphics card, priced on par with the GTX 1660 Ti, but offering RTX 2060 levels of performance (and maybe features? Real-time Ray Tracing anyone?). The former costs $259 while the latter is priced well over the $350 mark, with some partner cards selling for almost $400. So, if this leak actually materializes, then NVIDIA will probably have to slash the price of the RTX 2060 or perhaps come up with a Ti variant to combat Navi.
