
AMD has caught up to Nvidia in terms of memory latency

SantaC

Member
From being massively behind for several generations, RDNA 2 has taken a big leap and is now playing in the same ballpark as Nvidia.

RDNA 2’s cache is fast and there’s a lot of it. Compared to Ampere, latency is low at all levels. Infinity Cache only adds about 20 ns over an L2 hit and has lower latency than Ampere’s L2. Amazingly, RDNA 2’s VRAM latency is about the same as Ampere’s, even though RDNA 2 is checking two more levels of cache on the way to memory.

In contrast, Nvidia sticks with a more conventional GPU memory subsystem with only two levels of cache and high L2 latency. Going from Ampere’s SM-private L1 to L2 takes over 100 ns. RDNA’s L2 is ~66 ns away from L0, even with an L1 cache between them. Getting around GA102’s massive die seems to take a lot of cycles.

This could explain AMD’s excellent performance at lower resolutions. RDNA 2’s low latency L2 and L3 caches may give it an advantage with smaller workloads, where occupancy is too low to hide latency. Nvidia’s Ampere chips in comparison require more parallelism to shine.
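To put rough numbers on that occupancy point, here's a back-of-envelope Little's-law sketch (my own illustration, not from the article; the 500 GB/s bandwidth, the 64-byte access size, and the 300 ns VRAM figure are assumed, while the cache latencies are the ballparks above):

```python
# Little's law: to keep a memory subsystem busy you need roughly
# bandwidth * latency bytes in flight. Lower latency means fewer
# outstanding requests, i.e. less parallelism needed to hide it.
# The 500 GB/s bandwidth, 64-byte access size, and 300 ns VRAM figure
# are illustrative assumptions; the cache latencies are the ballparks
# quoted above.

def requests_needed(bandwidth_gbs: float, latency_ns: float, access_bytes: int = 64) -> float:
    """In-flight requests needed to cover a given latency at a given bandwidth."""
    bytes_in_flight = bandwidth_gbs * latency_ns  # GB/s * ns conveniently equals bytes
    return bytes_in_flight / access_bytes

for label, latency_ns in [("RDNA 2 L2 hit, ~66 ns", 66),
                          ("Ampere L2 hit, ~100+ ns", 100),
                          ("VRAM, ~300 ns (assumed)", 300)]:
    print(f"{label}: ~{requests_needed(500, latency_ns):.0f} requests in flight at 500 GB/s")
```

The fewer requests a GPU needs in flight to cover its latency, the better it copes with small, low-occupancy workloads, which is exactly the lower-resolution case described above.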




 

Armorian

Banned
They've caught up in standard raster performance in general. Nvidia will have a LOT of problems if they keep this up (if RDNA3 is chiplet-based, a single monolithic GPU has no chance...), fix RT performance, and get a working DLSS equivalent.

I will buy whatever GPU has the best price/performance ratio, and it is hard to imagine that any future graphics card will offer good value in this department, looking at the current GPU market :messenger_neutral:
 

GreatnessRD

Member
Makes me kind of excited about RDNA3, but gotta see what FSR is all about. Wanted to test out RDNA2, but can't be jumping through all these goddamn hoops, lmao.
 

Irobot82

Member
It's cute to talk about things none of us can own right now.

Edit:

To contribute to the thread: a 6800 XT was a serious contender for me because:

A. By the time I would have gotten it, I assume it would have been even a bit cheaper than a 3080.
B. Ray tracing to me isn't a deal breaker, and I think it still needs a gen or two before it's fully serviceable.
 

killatopak

Gold Member
Hopefully this means good things for the mid-gen refresh for consoles.

Kinda contemplating not buying the base models, since RT seems very basic, which can be fixed by a mid-gen refresh.
 

Alright

Banned
Hopefully this means good things for the mid-gen refresh for consoles.

Kinda contemplating not buying the base models, since RT seems very basic, which can be fixed by a mid-gen refresh.
I'm in the same ballpark. By the time TSMC is back to full production and has cleared the backlog (2022?), PSVR2 will be dropping, and the mid-gen refreshes, which I think will be a bigger jump than last gen, will be at most 12 months away. Might as well wait.
 

AGRacing

Member
They likely are ahead on (raw) RT performance (beating NV in Fortnite, Dirt 5, WoW RT, and tied in Godfall).

What you refer to as RT performance, though, is largely highly vendor-optimized good old shader code.
I have noticed this as well with my *brag* 6900 XT *brag*. Better than 30 series in Fortnite.... a lot worse in Cyberpunk....

Fortnite is genuinely gorgeous with all the features on... but also genuinely unplayable.

VIIIage should be interesting.
 
AMD's advances in gaming GPUs have been more than significant. People don't seem to notice, because they're too busy waving their e-peens around.
More to do with Nvidia still holding the crown in top-level GPU performance than anything else, even if AMD has drastically reduced that gulf. I really love how much AMD has turned around in the last 5 years or so. It was getting frustrating being beholden to only Nvidia/Intel for so long, as the lack of competition has been driving up GPU/CPU prices over the last 7 years or so. Hope to see this shrinking gap help bring prices back down across the board.
 

VFXVeteran

Banned
Hopefully this means good things for the mid-gen refresh for consoles.

Kinda contemplating not buying the base models, since RT seems very basic, which can be fixed by a mid-gen refresh.
I wouldn't count on a mid-gen refresh. There is no evidence that Sony or MS is even going this route. Last gen was an outlier.

Also, increasing the RT performance via hardware is going to take an entire generation to put it into proper form like the Nvidia boards. It's not a software thing.
 

killatopak

Gold Member
I wouldn't count on a mid-gen refresh. There is no evidence that Sony or MS is even going this route. Last gen was an outlier.

Also, increasing the RT performance via hardware is going to take an entire generation to put it into proper form like the Nvidia boards. It's not a software thing.

There is precedent, though.

Vega into Polaris for the refresh.
 

Buggy Loop

Member
So good that... it didn’t find its way into consoles. Oops? They all saw the option and said « naw fam ».

The hit rate at 4K is only about 58%, which means it relies more on the slower GDDR6 to fetch data.

It might make more sense in future iterations for MCM, but right now it’s just a patch over the leftover options AMD had in memory technology.

Dedicating so much silicon to that, only to fall flat on hit rate at 4K and save a whopping ~20 to 30 W on PC, is a ridiculous choice. That silicon area, dedicated to more CUs and paired with performant memory, would have whooped Nvidia in rasterization. No wonder Sony and Microsoft nope’d out.
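For a back-of-envelope sense of what that hit rate means for latency (a rough sketch; the ~58% figure is the one cited above, while the cache and VRAM latencies are assumed ballpark values, not measurements):

```python
# Average latency seen by a memory request, given an Infinity Cache hit
# rate. The ~90 ns cache-hit and ~250 ns VRAM latencies are assumed
# ballpark values for illustration only.

def effective_latency_ns(hit_rate: float, cache_ns: float = 90.0, vram_ns: float = 250.0) -> float:
    return hit_rate * cache_ns + (1.0 - hit_rate) * vram_ns

for hit_rate in (0.58, 0.80):  # roughly the 4K case vs. a higher, lower-resolution-style hit rate
    print(f"hit rate {hit_rate:.0%}: ~{effective_latency_ns(hit_rate):.0f} ns on average")
```

The lower the hit rate, the closer the average gets to plain GDDR6 latency, which is the point being made about 4K.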
 

Ascend

Member
Really? In what way? They are way behind in RT and machine learning.
Right... Because those are the only two variables that exist on graphics cards.

On a more serious note... Re-read my comment. I said that AMD's advances in gaming GPUs have been more than significant. I made no comparison to any other party. That's how you measure advancements.

Some of us are actually interested in tech. If you're constantly e-peening, of course you'll only see what they are behind in.
 

johntown

Banned
Right... Because those are the only two variables that exist on graphics cards.

On a more serious note... Re-read my comment. I said that AMD's advances in gaming GPUs have been more than significant. I made no comparison to any other party. That's how you measure advancements.

Some of us are actually interested in tech. If you're constantly e-peening, of course you'll only see what they are behind in.
No, those are just two that came to mind that I consider advances.

What advances are you talking about in GPUs? This is all I am really curious about. Or are you just generalizing? Meaning nothing specific, just advancing in general?
 

SantaC

Member
No, those are just two that came to mind that I consider advances.

What advances are you talking about in GPUs? This is all I am really curious about. Or are you just generalizing? Meaning nothing specific, just advancing in general?
Rasterization and memory latency. It's like you didn't even read the news article in this thread.
 

Ascend

Member
No, those are just two that came to mind that I consider advances.

What advances are you talking about in GPUs? This is all I am really curious about. Or are you just generalizing? Meaning nothing specific, just advancing in general?
There are many: their power efficiency improvements, their clock speed improvements, their die size improvements, the Infinity Cache as a step towards chiplets...
 

Ascend

Member
No, I asked a specific person to elaborate on what they said and that person answered very well.
To elaborate further... Some people forget that the Radeon VII and the 6700XT are on the same node, which is TSMC's 7nm. So... Let's put it all in perspective.

AMD's 6700XT, on the same node, is now 30% faster than the Radeon VII, while having:
  • 40 CUs instead of 60 CUs (which means 2560 shading units instead of 3840)
  • 192 bit bus instead of 4096 bit bus
  • 384 GB/s bandwidth instead of 1024 GB/s
  • 160 TMUs instead of 240 TMUs
  • 230W instead of 295W power consumption, while not using HBM
  • The same amount of ROPs
  • Clocks of over 2450 MHz instead of 1750 MHz

That is quite impressive... They all add up.
But that's not all...
The Radeon VII GPU was 331 mm2, while the 6700XT GPU is 335 mm2, pretty much the same. However, the 6700XT has 17.2 billion transistors instead of the Radeon VII's 13.2 billion. They managed to fit 4 billion more transistors into pretty much the same area.
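As a quick sanity check on those figures (a minimal sketch using only the transistor counts and die areas quoted above):

```python
# Transistor density from the numbers quoted above
# (transistor count, die area in mm2).

chips = {
    "Radeon VII (Vega 20)": (13.2e9, 331),
    "6700XT (Navi 22)":     (17.2e9, 335),
}

for name, (transistors, area_mm2) in chips.items():
    density_mt_per_mm2 = transistors / 1e6 / area_mm2  # million transistors per mm2
    print(f"{name}: {density_mt_per_mm2:.1f} MT/mm2")  # ~39.9 vs ~51.3
```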

And now I will make the first comparison to nVidia, but only vaguely, because I don't want to incite red vs green wars in here. But if you compare the advancement from the Radeon VII to the 6700XT with nVidia's advancements from the 1080 Ti to now, and remind yourself that there's a node change from 16nm to 12nm to 8nm across those nVidia generations, I'm not sure you can say that nVidia's advancements have been greater.
 

DonkeyPunchJr

World’s Biggest Weeb
To elaborate further... Some people forget that the Radeon VII and the 6700XT are on the same node, which is TSMC's 7nm. So... Let's put it all in perspective.

AMD's 6700XT, on the same node, is now 30% faster than the Radeon VII, while having:
  • 40 CUs instead of 60 CUs (which means 2560 shading units instead of 3840)
  • 192 bit bus instead of 4096 bit bus
  • 384 GB/s bandwidth instead of 1024 GB/s
  • 160 TMUs instead of 240 TMUs
  • 230W instead of 295W power consumption, while not using HBM
  • The same amount of ROPs
  • Clocks of over 2450 MHz instead of 1750 MHz

That is quite impressive... They all add up.
But that's not all...
The Radeon VII GPU was 331 mm2, while the 6700XT GPU is 335 mm2, pretty much the same. However, the 6700XT has 17.2 billion transistors instead of the Radeon VII's 13.2 billion. They managed to fit 4 billion more transistors into pretty much the same area.

And now I will make the first comparison to nVidia, but only vaguely, because I don't want to incite red vs green wars in here. But if you compare the advancement from the Radeon VII to the 6700XT with nVidia's advancements from the 1080 Ti to now, and remind yourself that there's a node change from 16nm to 12nm to 8nm across those nVidia generations, I'm not sure you can say that nVidia's advancements have been greater.
6700XT is indeed a big leap, but let’s be honest. Radeon VII marked the transition from 14nm -> 7nm and included an insane 16GB of VRAM with ~1 TB/s of bandwidth. Yet it was only about 20% faster than the Vega 64. Enough to reach parity with the 1080 Ti that released for the same price 2 full years earlier. Or, roughly equivalent to a 2080 from a few months earlier except with no RTX/DLSS, higher power consumption, and it sounded like a damn hair dryer.

6700XT looks so impressive in this comparison only because Radeon VII was a terrible gaming GPU that came nowhere near the performance you’d expect from a GPU that was 2 process nodes ahead of its predecessor. Almost like the 6000 series is finally realizing those gains we “should’ve” had in 2019.
 

Ascend

Member
6700XT is indeed a big leap, but let’s be honest. Radeon VII marked the transition from 14nm -> 7nm and included an insane 16GB of VRAM with ~1 TB/s of bandwidth. Yet it was only about 20% faster than the Vega 64. Enough to reach parity with the 1080 Ti that released for the same price 2 full years earlier. Or, roughly equivalent to a 2080 from a few months earlier except with no RTX/DLSS, higher power consumption, and it sounded like a damn hair dryer.

6700XT looks so impressive in this comparison only because Radeon VII was a terrible gaming GPU. Almost like the 6000 series is finally realizing those gains we “should’ve” had in 2019.
Well, we can also do it from the 5700XT to the 6700XT. Not quite as impressive, but still good, and on the same node... We have:

6700XT vs 5700XT
30% faster
192 bit bus vs 256 bit bus
384 GB/s vs 448 GB/s
Same amount of CUs
Same amount of TMUs
2450 MHz vs 1800 MHz
230W vs 225W
51.3 MT/mm2 vs 41.0 MT/mm2
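And a rough back-of-envelope (my own simplification, not an official figure) for how a cache hit rate can let a 384 GB/s bus behave like a much wider one, if a fraction of traffic never has to reach GDDR6:

```python
# Simplified model (an assumption for illustration): if `hit_rate` of
# memory traffic is served by the on-die cache, GDDR6 only sees the
# misses, so effective bandwidth is roughly dram_gbs / (1 - hit_rate),
# ignoring the cache's own bandwidth limits.

def effective_bandwidth_gbs(dram_gbs: float, hit_rate: float) -> float:
    return dram_gbs / (1.0 - hit_rate)

print(f"~{effective_bandwidth_gbs(384, 0.58):.0f} GB/s effective at a 58% hit rate")  # ~914 GB/s
```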
 

llien

Member
from the guy who brought us “Tensor cores are BOVINE FECES because all they do is math... and my AMD GPU can also do math!!”

From the guy who finds putting words into someone's mouth despicable, you little asshole: I don't even own an AMD (or NV) GPU at this point (crap inside notebooks doesn't count), and that's the way things have been for more than 2 years. My gaming is exclusively on PS4 at this point.

I have never called tensor cores bovine feces; you are mistaking that with your own ignorance, little asshole.
 

OverHeat

« generous god »
Got this puppy today, but I will keep my 3090 instead. To the Marketplace it goes!
 

spyshagg

Should not be allowed to breed
No, those are just two that came to mind that I consider advances.

What advances are you talking about in GPUs? This is all I am really curious about. Or are you just generalizing? Meaning nothing specific, just advancing in general?

Every GPU since 2006 uses what ATI (AMD) invented: unified shaders, introduced in 2005 with the Xbox 360. Before that, pixel shaders and vertex shaders were split. This changed the gaming industry more than you can ever know. AMD is responsible for a lot more innovations that people are using today (either in Nvidia cards or Intel CPUs).

The perception that AMD is behind because they are a "second choice" company is just Internet folklore formed around 2012 by newcomers. In truth, they were nearly bankrupted by half a decade of Intel's market manipulation, and without money you cannot develop.

They have money now.
 

ToTTenTranz

Banned
I wouldn't count on a mid-gen refresh. There is no evidence that Sony or MS is even going this route. Last gen was an outlier.

In 2019 Sony filed a patent about using chiplet GPUs/APUs in a home console.
It's not proof of a mid-gen refresh, but it certainly is evidence, considering it even aligns with AMD's own plans for chiplet-based GPUs.


Also, increasing the RT performance via hardware is going to take an entire generation to put it into proper form like the Nvidia boards. It's not a software thing.
It becomes a software thing if future AAA games/engines push for higher-quality visuals without pushing RT as much as the examples we see in e.g. Cyberpunk.
RT doesn't just lean on the RT cores; it taxes the compute ALUs too, and those are arguably the most valuable resource in a GPU.

After 2 years of Nvidia promoting hybrid RT as the holy grail of next-gen gaming, Epic came in and showed the best-looking real-time 3D yet with the Unreal Engine 5 demo, without RT.
The best-looking 9th-gen console game so far is, by far IMO, Demon's Souls. Also without RT.

RT is cool in some examples (though in Cyberpunk it looks completely passable in 75% of the examples I've seen, IMHO), but it's not like...

Throughout this gen, I'm anticipating RT being used in much more conservative/optimized/computationally-effective ways than what we're seeing in the latest RTX-branded PC titles.


The Radeon VII GPU was 331 mm2, while the 6700XT GPU is 335 mm2, pretty much the same. However, the 6700XT has 17.2 billion transistors instead of the Radeon VII's 13.2 billion. They managed to fit 4 billion more transistors into pretty much the same area.
The Infinity Cache is pretty dense. The 335mm^2 Navi 22 seems to be not much more than the 251mm^2 Navi 10 plus Infinity Cache.



And now I will make the first comparison to nVidia, but only vaguely, because I don't want to incite red vs green wars in here. But if you compare the advancement from the Radeon VII to the 6700XT with nVidia's advancements from the 1080 Ti to now, and remind yourself that there's a node change from 16nm to 12nm to 8nm across those nVidia generations, I'm not sure you can say that nVidia's advancements have been greater.
Why do you not consider working with TSMC to help develop a new node as an advancement?

The effort in designing a chip for a new and developing node isn't the same as designing it for a known quantity. The Radeon VII / Vega 20 was most probably a very important pipe cleaner for AMD to be able to develop RDNA2.
 
Dirt 5 finally has RT available for everyone, as it previously was restricted to a beta branch available only to journalists. :messenger_tears_of_joy::messenger_tears_of_joy::messenger_tears_of_joy:

Nothing fishy here, for this AMD-sponsored game that was 30% faster on AMD hardware and had RT restricted to journalists. Oh noes, what happened with the final public release? How did AMD go from being 30% ahead to below Nvidia?



(Dirt 5 ray tracing benchmark chart)
 

llien

Member
to put it into proper form like the Nvidia boards.
What is behind these words? Anything beyond your personal impressions?
Exactly which part of DXR 1.1 on AMD's amazing cache solution (thank you, Zen) needs a decade to "catch up" with GPUs that AMD is already beating in a number of games, despite somehow being a "decade behind"?
 