
AMD has caught up to Nvidia in terms of memory latency

spyshagg

Should not be allowed to breed
Dirt 5 finally has RT available for everyone, as it previously was restricted to a beta branch available only to journalists. :messenger_tears_of_joy::messenger_tears_of_joy::messenger_tears_of_joy:

Nothing fishy here, for this AMD-sponsored game that was 30% faster on AMD hardware and had its RT restricted to journalists. Oh noes, what happened with the final public release? How did AMD go from 30% ahead to below Nvidia?



[Image: radeon-2.png]

Nice graphics showing laten.... oh wait. Wrong topic buddy.

And those charts show at best a tie for Nvidia. The 6700 XT is matching the 2080 Ti.
 
Nice graphics showing laten.... oh wait. Wrong topic buddy.

And those charts show at best a tie for Nvidia. The 6700 XT is matching the 2080 Ti.


I was correcting some false claims made in the thread. The results that show Nvidia in the lead are actually a tie, you say? At best? This game has been included in various benchmarks ever since RDNA2 launched, notably the one where Hardware Unboxed made their famous claim that Ampere isn't that good at 1440p. Dirt 5 and Valhalla were skewing the averages to the tune of 30% each. Other outlets noticed funky rendering with Dirt 5 on Nvidia and removed the game from testing. And look at that, after the review period was over, the 30% AMD lead over Nvidia not only disappeared, it turned into a deficit.
 

spyshagg

Should not be allowed to breed
I was correcting some false claims made in the thread. The results that show Nvidia in the lead are actually a tie, you say? At best? This game has been included in various benchmarks ever since RDNA2 launched, notably the one where Hardware Unboxed made their famous claim that Ampere isn't that good at 1440p. Dirt 5 and Valhalla were skewing the averages to the tune of 30% each. Other outlets noticed funky rendering with Dirt 5 on Nvidia and removed the game from testing. And look at that, after the review period was over, the 30% AMD lead over Nvidia not only disappeared, it turned into a deficit.

Look, I don't care about what people say. I can see the graphs myself, and what I said is what the numbers say for this particular game.

You argue like you are offended by any positive AMD news.
 
Last edited:
Look, I don't care about what people say. I can see the graphs myself, and what I said is what the numbers say for this particular game.

You argue like you are offended by any positive AMD news.


First of all, you should relax a bit. Who are you even talking to? Second, try harder on the reading comprehension. Maybe you'll reach a point where you get what the information actually says, not your fictional interpretation of it.
 

thelastword

Banned
All I have to say is, let Nvidia and its fans keep their heads in the sand like Intel did... RDNA 3 is going to be straight bonkers since RDNA 2 is already so fast and competing on the high end... Best rasterization GPUs pound for pound... The way they will improve Infinity Cache, decrease latency even more, or even bring a chiplet-designed GPU is going to turn this GPU war inside out... I imagine AMD can put in a chiplet just for raytracing if they want to... One thing is sure, AMD is going to bring even more heat for the next round...
 

VFXVeteran

Banned
In 2019 Sony filed for a patent about having chiplet GPUs/APUs in a home console.
It's not proof of a mid-gen refresh, but it certainly is evidence, considering it even aligns with AMD's own plans for chiplet-based GPUs.
AMD may have new technology coming, but that doesn't mean they have to implement their new tech on the console first.

It becomes a software thing if future AAA games/engines push for higher-quality visuals without pushing RT as much as the examples we see in e.g. Cyberpunk.
RT performance doesn't just push from the RT cores, it taxes the compute ALUs too, and that's arguably the most valuable resource in a GPU.
It's the most valuable resource in a GPU if it's NOT doing any RT. The ALUs don't have the bandwidth to compute recursive RT with continuous triangular intersection tests.

After 2 years of nvidia promoting hybrid RT as the holy grail of next-gen gaming, Epic came in and showed the best looking ever real-time 3D with the Unreal Engine 5 demo, without RT.
That is very subjective. While I love the Nanite tech, the demo lacked a number of rendering features that are more important than geometry, like PBR shaders.

The best looking 9th-gen console game for now is, by far IMO, Demon's Souls. Also without RT.
Demon's Souls' detailed texturing system is incredible, but everything else in the game is par for the course. It doesn't have above-average lighting, the FX are standard, and the shaders are subpar compared to other games that focused on PBR shaders. But I agree it has the best texturing detail, coupled with old-school expensive POM, so far.
 
Last edited:

Buggy Loop

Member
All I have to say is, let Nvidia and its fans keep their heads in the sand like Intel did... RDNA 3 is going to be straight bonkers since RDNA 2 is already so fast and competing on the high end... Best rasterization GPUs pound for pound... The way they will improve Infinity Cache, decrease latency even more, or even bring a chiplet-designed GPU is going to turn this GPU war inside out... I imagine AMD can put in a chiplet just for raytracing if they want to... One thing is sure, AMD is going to bring even more heat for the next round...

Comparing Nvidia to Intel’s decade slumber and in-house foundry...

[Keegan-Michael Key Lol GIF by HULU]


RDNA 2 is not even a rasterization powerhouse, I mean it's competing, that's it, that's the cake you can eat to celebrate. While Nvidia is focusing on ML and RT, dedicating a lot of silicon area to that, AMD went all rasterization CUs and managed to be within ~3% at 4K. The question is how the fuck did they not sucker punch Nvidia? If you're building a drag racer for a straight-line race (rasterization) against a McLaren F1 that's made more for curves (ML/RT), but you still almost finish equal, someone fucked up.

MCM is also in Nvidia's plans, as it is for almost every chip maker around the world, because everyone sees the diminishing returns of monolithic designs coming in the next few years.

It's cute to still see so much AMD hope after all those decades. I used to have it too, even on this forum all the way back in 2004 (and I was on ATI much further back, in 1996).
 
Last edited:

VFXVeteran

Banned
What is behind these words? Anything beyond your personal impressions?
Exactly which part of DXR 1.1 on AMD's amazing cache solution (thank you, Zen) needs a decade to "catch up" with GPUs that AMD is already beating in a number of games, despite somehow being a "decade behind"?
I never said a decade behind. AMD boards are behind the curve to me because of their non-competitive tech at native 4K. Their rasterizer is good at lower resolutions, but it needs to compete at native 4K and above. That would represent forward thinking to me and show a focus on raw throughput, the number of pixels pushed through the pipeline. If the AMD boards had dedicated hardware for RT as well as hardware tensor cores for AI-based image reconstruction, plus high throughput at native 4K rasterization, I would consider them a significant competitor to Nvidia. Those things can't be done in the next 2 years for a mid-gen refresh console, and certainly not for $500.
 

Armorian

Banned

How old are these results?

I was correcting some false claims made in the thread. The results that show Nvidia in the lead are actually a tie, you say? At best? This game has been included in various benchmarks ever since RDNA2 launched, notably the one where Hardware Unboxed made their famous claim that Ampere isn't that good at 1440p. Dirt 5 and Valhalla were skewing the averages to the tune of 30% each. Other outlets noticed funky rendering with Dirt 5 on Nvidia and removed the game from testing. And look at that, after the review period was over, the 30% AMD lead over Nvidia not only disappeared, it turned into a deficit.

I like that there is no game with more screwed-up performance on one vendor versus the average than Valhalla (and Dirt 5 before it), yet it's on almost every benchmark list (even Control isn't close), and this game performs like shit even on AMD GPUs. Ubisoft really outdid themselves here :messenger_tears_of_joy: The best part is that ACV doesn't really use the Nvidia GPU at full capacity: ~95% GPU usage all the time and a power draw of ~150W compared to over 200W in "normal" games (on my 3070).
 
Last edited:

spyshagg

Should not be allowed to breed
Comparing Nvidia to Intel’s decade slumber and in-house foundry...

[Keegan-Michael Key Lol GIF by HULU]


RDNA 2 is not even a rasterization powerhouse, I mean it's competing, that's it, that's the cake you can eat to celebrate. While Nvidia is focusing on ML and RT, dedicating a lot of silicon area to that, AMD went all rasterization CUs and managed to be within ~3% at 4K. The question is how the fuck did they not sucker punch Nvidia? If you're building a drag racer for a straight-line race (rasterization) against a McLaren F1 that's made more for curves (ML/RT), but you still almost finish equal, someone fucked up.

MCM is also in Nvidia's plans, as it is for almost every chip maker around the world, because everyone sees the diminishing returns of monolithic designs coming in the next few years.

It's cute to still see so much AMD hope after all those decades. I used to have it too, even on this forum all the way back in 2004 (and I was on ATI much further back, in 1996).

That is exactly the same speech I saw people parroting before the following products launched: A64, 9800PRO, HD4870, HD7970, R9 290X, Ryzen, RX6900XT.

AMD dropped the ball from 2012 to 2017 after nearly being bankrupted by competition fraud, and suddenly a bunch of people think AMD is a loser forever. You guys never learn. "Cute" indeed.
 

johntown

Banned
Every GPU since 2006 has used what ATI (AMD) invented - unified shaders - introduced in 2005 with the Xbox 360. Before that, pixel shaders and vertex shaders were split. This changed the gaming industry more than you can ever know. AMD is responsible for a lot more innovations that people are using today (either in Nvidia cards or in Intel CPUs).

The perception that AMD is behind because they are a "second choice" company is just Internet folklore formed around 2012 by newcomers. In truth, they were almost bankrupted by half a decade of market manipulation by Intel, and without money you cannot develop.

They have money now.
There is no perception that they are behind in the GPU market. It is a fact, and has been for a while. I'm not saying they never had any innovations or advancements, but for a few years now they have been the lower-end budget option. Can this change? Of course it can, but AMD has a lot of work to do to catch up in the GPU market.
 

spyshagg

Should not be allowed to breed
I never said a decade behind. AMD boards are behind the curve to me because of their non-competitive tech at native 4K. Their rasterizer is good at lower resolutions, but it needs to compete at native 4K and above. That would represent forward thinking to me and show a focus on raw throughput, the number of pixels pushed through the pipeline. If the AMD boards had dedicated hardware for RT as well as hardware tensor cores for AI-based image reconstruction, plus high throughput at native 4K rasterization, I would consider them a significant competitor to Nvidia. Those things can't be done in the next 2 years for a mid-gen refresh console, and certainly not for $500.

You truly have no right to be "vetted". It's a disgrace.

[Chart: relative-performance_1920-1080.png]

[Chart: relative-performance_2560-1440.png]

[Chart: relative-performance_3840-2160.png]



It beats or matches the competition at two resolutions, and it's 5% down at 4K. The 5% is an average FPS difference of:

[Chart: T3PJMsu.png]



"non-competitive". Uh.
 
Last edited:

FireFly

Member
Dirt 5 and Valhalla were skewing the averages to the tune of 30% each. Other outlets noticed funky rendering with Dirt 5 on Nvidia and removed the game from testing. And look at that, after the review period was over, the 30% AMD lead over Nvidia not only disappeared, it turned into a deficit.
It's pretty normal for each vendor to catch up in successive driver releases on games where they are behind. Just look at AMD catching up on rasterization performance in Cyberpunk. But it's not clear that AMD doing well in rasterization at 1440p relied only on "anomalous" performances. Right now, the 6800 XT is 4 percentage points ahead of the 3080 at 1440p and leads in more titles in the latest TechPowerUp suite.

 

Whitecrow

Banned
They likely are ahead on (raw) RT performance (beating NV in Fortnite, Dirt 5, WoW RT, and tying in Godfall).

What you refer to as RT performance, though, is largely highly vendor-optimized good old shader code.
Nvidia has tensor cores, which are specialized in the math that RT needs.

AMD doesn't have this specialized silicon.
 
About Valhalla and Dirt 5 "skewing" averages, I remember plenty of times when Hardware Unboxed and others showed and discussed averages excluding the outliers. People's memory is selective at their convenience, but why should those two games be excluded only because they perform so much better on one vendor's hardware? These benchmarks have always had the current Dirt and Assassin's Creed among the games tested, so why do both need to be excluded now?

And yes, AMD caught up with Nvidia in rasterization. It's slower at 4K for obvious reasons, lack of bandwidth, but AMD made their bet knowing very well that most gamers don't even have a 4K screen to play on and most are more concerned with high refresh rates.
 

ToTTenTranz

Banned
AMD may have new technology coming, but that doesn't mean they have to implement their new tech on the console first.
It's not a patent filed by AMD. I pointed to a patent filed by Sony.

It's the most valuable resource in a GPU if it's NOT doing any RT.
It's the most valuable resource because you can do rasterization without RT cores, but you can't do RTRT without GPU compute ALU.
In fact, you can even do RTRT with GPU compute and no dedicated BVH units (slowly of course).
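Purely as an illustration (a minimal sketch, not any vendor's actual implementation): the per-ray work a compute-only ray tracer grinds through on the general ALUs is tests like the classic Möller–Trumbore ray/triangle intersection below. Dedicated RT/BVH units exist precisely to offload this kind of loop.

```cpp
// Hypothetical sketch: the inner ray/triangle test a compute-only ray tracer
// runs per ray when no BVH hardware is involved (Möller–Trumbore algorithm).
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns the hit distance t along the ray, or nothing on a miss.
std::optional<float> intersect(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
    constexpr float kEps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < kEps) return std::nullopt;   // ray parallel to triangle
    float invDet = 1.0f / det;
    Vec3 tv = sub(orig, v0);
    float u = dot(tv, p) * invDet;
    if (u < 0.0f || u > 1.0f) return std::nullopt;    // outside barycentric range
    Vec3 q = cross(tv, e1);
    float v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;
    float t = dot(e2, q) * invDet;
    if (t > kEps) return t;                           // hit in front of the ray origin
    return std::nullopt;
}
```

With hardware RT, the BVH traversal that decides which triangles even get tested, plus tests like this one, run on fixed-function units; in a pure compute path both stages compete with shading for the same ALUs, which is why it works but is slow.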


That is very subjective. While I love the Nanite tech, the demo lacked a number of rendering features that are more important than geometry, like PBR shaders.
The UE5 demo being the best looking example out there is subjective? Ask a statistically relevant number of people on which real-time rendering example is the best out there and it won't be subjective at all. Perhaps we could do a poll right here in the forum.


Demon's Souls' detailed texturing system is incredible, but everything else in the game is par for the course. It doesn't have above-average lighting, the FX are standard, and the shaders are subpar compared to other games that focused on PBR shaders. But I agree it has the best texturing detail, coupled with old-school expensive POM, so far.
I'm pretty sure Demon's Souls is also using unprecedented amounts of geometry for a console game, along with a pretty good global illumination (voxel based perhaps?). It's not just high-res textures or texture streaming. It's also a 60FPS release-day game so there's obviously room for improvement for future games.

Regardless, the "how" is relevant to very few stakeholders in the gaming industry (certainly not to 99.99% of gamers or critics). Demon's Souls is the best looking game on any 9th-gen console according to pretty much every review I've seen so far (and my own opinion, as I just recently finished the game). On the other side of the coin there's, IIRC, Fallout 4 using PBR with mediocre results, or Control with RT without winning any IQ awards.

If "subpar shaders" and no RT are the best way to achieve spectacular visuals, then maybe a number of devs should rethink their approach. Or perhaps the better answer is that popular opinion about how good a videogame looks doesn't scale linearly with the number of state-of-the-art rendering techniques being adopted.
 

ToTTenTranz

Banned
Nvidia has tensor cores, which are specialized in the math that RT needs.

AMD doesn't have this specialized silicon.
AFAIK RT doesn't use tensor cores.
DLSS uses tensor cores, and for many games most Nvidia GPUs need to use DLSS to get RT at acceptable framerates, but that's a different subject. According to Nvidia's own presentations the tensor cores take no place in the RT pipeline.

RDNA 2 is not even a rasterization powerhouse, I mean it's competing, that's it, that's the cake you can eat to celebrate. While Nvidia is focusing on ML and RT, dedicating a lot of silicon area to that, AMD went all rasterization CUs and managed to be within ~3% at 4K. The question is how the fuck did they not sucker punch Nvidia?

Navi 21 isn't a 630mm² behemoth of a chip using an exquisite and expensive 384-bit memory architecture.
Compared to GA102, it uses less power, a less expensive PCB, and fewer, cheaper JEDEC-standardized GDDR6 chips.

Navi 21 cards have a massive headroom for scaling down in price, compared to GA102. The fact that they're competing in rasterization performance despite that is pretty impressive.
 
Last edited:

Buggy Loop

Member
That is exactly the same speech I saw people parroting before the following products launched: A64, 9800PRO, HD4870, HD7970, R9 290X, Ryzen, RX6900XT.

AMD dropped the ball from 2012 to 2017 after nearly being bankrupted by competition fraud, and suddenly a bunch of people think AMD is a loser forever. You guys never learn. "Cute" indeed.

I’ve owned almost exclusively AMD GPUs in my 25 years of PC building, so don’t come lecturing me on what AMD did or did not do right. You can even search this forum for my posts about AMD throughout the years. The first Nvidia I ever bought was Pascal.

Who here said that AMD never competed?

[Jim Carrey What GIF]


These are the cards (unicorns) that are here today. If both of them were on the table right now at MSRP, I wouldn’t pick AMD. Not that it’s a bad product, it’s just an OK product for a very selective group who absolutely don’t want to use RT or don’t believe in how disruptive ML will be in the coming years in all aspects of video games. You’re satisfied with the offer? Good for you! Hell, even finding any of them at this point is an achievement.

But to compare Nvidia to Intel, or GPU technologies and how they change over time compared to x86 CPUs, and to look at Ryzen’s 4-year climb on the best node on the planet just to get its head above water against Intel’s 14++++++++++++++++ node, yes, that’s cute.

And this is coming from a first-gen Zen owner (1600) who recently moved to third gen with a 5600X. Even though the Zen proposition made more sense for me from the beginning, I’m not sucking AMD cock and saying Intel is doomed or that AMD sauce is the best. As soon as they got their head above water, they raised prices. That’s a shit move. Intel slept at the wheel and is stuck on a foundry problem, but when they wake up? Well, competition will be interesting.

I don’t think anyone can come in here and say with a straight face that Nvidia is sleeping at the wheel.
 
Last edited:
About Valhalla and Dirt 5 "skewing" averages, I remember plenty of times when Hardware Unboxed and others showed and discussed averages excluding the outliers. People's memory is selective at their convenience, but why should those two games be excluded only because they perform so much better on one vendor's hardware? These benchmarks have always had the current Dirt and Assassin's Creed among the games tested, so why do both need to be excluded now?

And yes, AMD caught up with Nvidia in rasterization. It's slower at 4K for obvious reasons, lack of bandwidth, but AMD made their bet knowing very well that most gamers don't even have a 4K screen to play on and most are more concerned with high refresh rates.


You don't need to exclude them just because they run better on one vendor. The thing is, there's something defective for Nvidia here. When you take two cards of similar performance and one game runs 30% faster on one of them for no apparent reason, that's indicative of something being wrong. In this case, both Dirt and Valhalla are AMD games, with big banners on their site. I think it's pretty clear that Dirt 5 was doing some nasty stuff under the hood for Nvidia. After a lot of reviewers included this game with its special reviewers-only beta branch, skewing the results, you now have Nvidia leading once that passed? How did 30% ahead evaporate and become 10% below?

Hardware Unboxed made that incorrect article last year about how RDNA2 is better for 1440p. Then they proceeded to retest the cards 2 months ago and now had the 3080 and 6800 XT tied at 1440p. But if you remove those disproportionate Dirt 5 scores, the 3080 is ahead at every resolution, even by their own tests.

It's not wrong when some games perform better on one vendor, it happens. It's when the difference is as extreme and disproportionate as it was with Dirt and Valhalla that it's an issue. Because these two outliers heavily distort the end result, the average, and give you an incorrect performance profile if you're not looking at the details. You only need to look at Hardware Unboxed: they make an article about how Ampere isn't suited for 1440p, retest the cards later and they're tied, retest Dirt 5 and now Ampere leads at 1440p. If they had removed Dirt from the start, they would have gotten last year the results we have now.
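As a rough illustration of how two such outliers move a suite average (hypothetical numbers, not anyone's actual benchmark data): take ten games where the two cards effectively tie and add two titles where one card leads by 30%.

```cpp
// Hypothetical illustration: how two +30% outliers shift a 12-game average.
#include <cstdio>
#include <vector>

int main() {
    // Relative performance of card A vs card B per game (1.00 = tie).
    std::vector<double> suite(10, 1.00);      // ten games: dead even
    std::vector<double> withOutliers = suite;
    withOutliers.push_back(1.30);             // outlier #1 (Dirt 5-style lead)
    withOutliers.push_back(1.30);             // outlier #2 (Valhalla-style lead)

    auto mean = [](const std::vector<double>& v) {
        double s = 0.0;
        for (double x : v) s += x;
        return s / v.size();
    };

    std::printf("average without outliers: %.3f\n", mean(suite));         // 1.000
    std::printf("average with outliers:    %.3f\n", mean(withOutliers));  // 1.050
    return 0;
}
```

Two games are enough to turn a dead heat into a ~5% "average lead", which is the whole argument about whether such titles belong in a suite or should at least be flagged as outliers.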
 

spyshagg

Should not be allowed to breed
These are the cards (unicorns) that are here today. If both of them were on the table right now at MSRP, I wouldn’t pick AMD.

So, it's all your opinion then. Just your opinion.
It would appear from the previous post you made that you were spewing facts about AMD never being the best one on MERIT, while at the same time calling us (the ones who respect actual computer history) "cute".

Yeah. Looking at your rhetoric, I would hire you on the spot alright.
 

spyshagg

Should not be allowed to breed
You don't need to exclude them just because they run better on one vendor. The thing is, there's something defective for Nvidia here. When you take two cards of similar performance and one game runs 30% faster on one of them for no apparent reason, that's indicative of something being wrong. In this case, both Dirt and Valhalla are AMD games, with big banners on their site. I think it's pretty clear that Dirt 5 was doing some nasty stuff under the hood for Nvidia. After a lot of reviewers included this game with its special reviewers-only beta branch, skewing the results, you now have Nvidia leading once that passed? How did 30% ahead evaporate and become 10% below?

Hardware Unboxed made that incorrect article last year about how RDNA2 is better for 1440p. Then they proceeded to retest the cards 2 months ago and now had the 3080 and 6800 XT tied at 1440p. But if you remove those disproportionate Dirt 5 scores, the 3080 is ahead at every resolution, even by their own tests.

It's not wrong when some games perform better on one vendor, it happens. It's when the difference is as extreme and disproportionate as it was with Dirt and Valhalla that it's an issue. Because these two outliers heavily distort the end result, the average, and give you an incorrect performance profile if you're not looking at the details. You only need to look at Hardware Unboxed: they make an article about how Ampere isn't suited for 1440p, retest the cards later and they're tied, retest Dirt 5 and now Ampere leads at 1440p. If they had removed Dirt from the start, they would have gotten last year the results we have now.

1) You have no proof of AMD paying devs to cheat on Nvidia performance.

2) You cannot complain about AMD paying devs to cheat on Nvidia performance, because Nvidia themselves did exactly that 5 years ago. Oh, and they also cheated on image quality in the 2000s vs ATI and got caught red-handed. Oh my god. The number of times Nvidia cheated in the 2000s was ridiculous. This cannot be a surprise to anyone.

So, what you are unhappy about is that there must be genuine merit in AMD's current performance, but your small brain is only ready to accept conspiracies. Fair enough, I'll give you a few more years to mature.
 
Last edited:

VFXVeteran

Banned
You truly have no right to be "vetted". It's a disgrace.

[Chart: relative-performance_1920-1080.png]

[Chart: relative-performance_2560-1440.png]

[Chart: relative-performance_3840-2160.png]



It beats or matches the competition at two resolutions, and it's 5% down at 4K. The 5% is an average FPS difference of:

[Chart: T3PJMsu.png]



"non-competitive". Uh.
Dude, can you not act like a child, trying to make me look bad instead of making a reasonable argument? It shows the immaturity and saltiness of people about my "vetted" status. If you want to be declared vetted, then ask the mods. Don't hate on me because I work in the industry. It's not a bragging title. It's a title made by NeoGAF.

My comment still stands: it's behind at native 4K. I don't care if it's 2%. AMD spent its resources on the wrong things by focusing on rasterization and STILL isn't on top in that department, let alone all the other shortcomings with regard to AI and RT.
 

Buggy Loop

Member
Navi 21 isn't a 630mm² behemoth of a chip using an exquisite and expensive 384-bit memory architecture.
Compared to GA102, it uses less power, a less expensive PCB, and fewer, cheaper JEDEC-standardized GDDR6 chips.

Navi 21 cards have a massive headroom for scaling down in price, compared to GA102. The fact that they're competing in rasterization performance despite that is pretty impressive.

Samsung's MTr/mm² vs TSMC's is mainly responsible for the die size difference; there's a ~1.5 billion transistor difference, which is about 5%. Most of the area difference is a difference of foundry, not the chipmaker's technology.
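For a rough sense of scale (assuming the commonly cited public figures of ~28.3B transistors on ~628 mm² for GA102 on Samsung 8N and ~26.8B on ~520 mm² for Navi 21 on TSMC N7):

$$ \frac{28.3\ \text{B}}{628\ \text{mm}^2} \approx 45\ \text{MTr/mm}^2 \qquad \frac{26.8\ \text{B}}{520\ \text{mm}^2} \approx 51\ \text{MTr/mm}^2 $$

so the transistor counts differ by roughly 5% while the die areas differ by roughly 20%, which is the foundry-density point being made here.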

Exquisite and expensive memory architecture? I mean, if you have any information on the price Nvidia paid, please share. I'm pretty sure nobody knows. It's pretty much a logical conclusion that Samsung signed a complete package deal with Nvidia for the foundry and the memory. Nvidia isn't some nobody picking parts at the store at full price per unit.

Less expensive GDDR6 vs GDDR6X, yes, definitely, but is that the whole picture? No. SRAM has a cost, an enormous cost in fact; even if it doesn't show up on a bill of materials the way GDDR6 does, it's definitely expensive given how much die area is dedicated to it, all manufactured at one of the planet's best foundries.

And all that for... a ~20 to 30 watt difference. What does that say?
Can we really say it's the memory choice? The foundry node difference alone should make this an even bigger gap. Nvidia could take their 5% lead at 4K, undervolt their cards with a patch to match AMD's performance, and be at what, 60W less?

As for the less expensive PCB, I believe that got debunked quite early when we saw AIB pricing. Only AMD's own cards are at MSRP, and after they wanted to pull out of that a month after launch, they got called out for it and reversed their decision. But overall? AIB options are almost always more expensive than Nvidia's. So that means the chip cost coming in is higher, as they basically share the same cooling platforms.

I'll reverse your last question: how is Nvidia competing in rasterization against a card almost exclusively made for it, when they dedicated so much silicon area to RT & ML? The fact that they're competing in rasterization performance despite that is mind-blowing.
 
Last edited:
1) You have no proof of AMD paying devs to cheat on Nvidia performance.

2) You cannot complain about AMD paying devs to cheat on Nvidia performance, because Nvidia themselves did exactly that 5 years ago. Oh, and they also cheated on image quality in the 2000s vs ATI and got caught red-handed. Oh my god. The number of times Nvidia cheated in the 2000s was ridiculous. This cannot be a surprise to anyone.

So, what you are unhappy about is that there must be genuine merit in AMD's current performance, but your small brain is only ready to accept conspiracies. Fair enough, I'll give you a few more years to mature.


I'm not accusing anyone because, yes, we have no proof. It's just interesting how this whole affair developed. Indeed, Nvidia did a lot of shit over the years. I'm not in love with either vendor. You should always choose what's best at any given moment. I had Radeons back when they were kicking Nvidia's ass, during the Radeon 9700/9800 days.

I'm not unhappy about anything; we're just pointing out various points of interest that happened here.
 

VFXVeteran

Banned
It's not a patent filed by AMD. I pointed to a patent filed by Sony.


It's the most valuable resource because you can do rasterization without RT cores, but you can't do RTRT without GPU compute ALU.
In fact, you can even do RTRT with GPU compute and no dedicated BVH units (slowly of course).

I'm talking about the BVH acceleration though. No one in their right mind would prefer using the TMUs for testing intersection when they could be used for evaluating shaders. I don't understand your point here. Nvidia is ahead on this. Period.

The UE5 demo being the best looking example out there is subjective? Ask a statistically relevant number of people on which real-time rendering example is the best out there and it won't be subjective at all. Perhaps we could do a poll right here in the forum.
The UE5 demo is stellar because of its geometry tessellation. That is all. Lumen isn't all that, as there are several other custom GI solutions that do the same. You also can't make something subjective into objective fact. No matter how pretty those static rocks look (and they can't deform), the overall look of the demo itself isn't very impressive. We can go into more detail if you want, or maybe you can make a graphics analysis thread on the pros and cons of why you think geometric tessellation is everything to a rendering pipeline. I'd gladly and objectively prove to you that it's lighting/shading instead.

I'm pretty sure Demon's Souls is also using unprecedented amounts of geometry for a console game, along with a pretty good global illumination (voxel based perhaps?). It's not just high-res textures or texture streaming. It's also a 60FPS release-day game so there's obviously room for improvement for future games.
Like I said in my analysis thread:


Prove to me that it uses large amounts of tessellation. It just isn't there. I proved my points with videos and examples in my thread.
 

spyshagg

Should not be allowed to breed
Dude, can you not act like a child, trying to make me look bad instead of making a reasonable argument? It shows the immaturity and saltiness of people about my "vetted" status. If you want to be declared vetted, then ask the mods. Don't hate on me because I work in the industry. It's not a bragging title. It's a title made by NeoGAF.

My comment still stands: it's behind at native 4K. I don't care if it's 2%. AMD spent its resources on the wrong things by focusing on rasterization and STILL isn't on top in that department, let alone all the other shortcomings with regard to AI and RT.

Vetted implies a level of trust in someone's comments and opinions. A trust that, judging from ALL your post history and moral conduct, you do not warrant. And people who do not know better are led to believe your fraudulent and biased claims.

The problem is not someone being vetted. It's you being vetted. You probably manage assets or fill some janitor-like function in the industry, and yet you behave like you have the god-given right to claim a 5% difference is not being "competitive". In the other Resident Evil thread you LED people to believe the game has poor or absent RT, which is a LIE.

Carry on.
 

Buggy Loop

Member
So, it's all your opinion then. Just your opinion.
It would appear from the previous post you made that you were spewing facts about AMD never being the best one on MERIT, while at the same time calling us (the ones who respect actual computer history) "cute".

Yeah. Looking at your rhetoric, I would hire you on the spot alright.

Isn't it always opinions? Lol. Nobody here, on this entire forum, is expressing anything other than an opinion in the end. You interpret a result one way; it can be interpreted differently by someone else depending on their needs. If I could pick an Ampere card, I would. I prefer DLSS, VR & RT capabilities, and it also ends up being faster in rasterization at 4K, which is the cherry on the cake. Not to mention the VR streaming encoder, which Nvidia does way better.

One of the longest-lasting cards I had was the R9 280X, and even today it's still active after I gave it to someone. It's not my best memory of AMD because of pure raw performance vs the competition; I'm pretty sure Nvidia had a stronger competitor at the time. It's my best memory because, for the time, the value, especially at the price I got it for, was unbeatable. I'm sure many felt the same throughout AMD's years. Has AMD been fucked sideways throughout the years? Yes! Intel fucked them with the 64-bit debacle, Nvidia fucked with them by cheating here and there. But ultimately, I don't care. I'm not giving money to an indie rock band here because of their hardship in life; I always buy the best bang/buck, whatever the generation. If RDNA 3 beats them in the overall package of ML/RT/rasterization, I'll do a 180 on a dime.
 

spyshagg

Should not be allowed to breed
I'm not unhappy about anything; we're just pointing out various points of interest that happened here.

Did you read the topic title, and the discussion in it, before you posted?

You could not help but come in here and spin the conversation the way that makes you happy. You could not live with yourself if any - ANY - positive AMD topic could go on without people knowing Nvidia is better.
 

VFXVeteran

Banned
Vetted implies a level of trust in someone's comments and opinions. A trust that, judging from ALL your post history and moral conduct, you do not warrant. And people who do not know better are led to believe your fraudulent and biased claims.

The problem is not someone being vetted. It's you being vetted. You probably manage assets or fill some janitor-like function in the industry, and yet you behave like you have the god-given right to claim a 5% difference is not being "competitive". In the other Resident Evil thread you LED people to believe the game has poor or absent RT, which is a LIE.

Carry on.
Dude, I didn't lead anyone in the RE thread. Everyone made their own observations. It does look like poor RT from what I've seen. But I can guarantee you that when I get the game, everyone will want to read my analysis on the graphics. That's where the "vetted" tag becomes meaningful. You don't earn any brownie points trying to follow me around claiming I don't know shit when that's clearly not the case.

And the janitor-like comment is hilarious. If you really want to know what I did/do .. do a search for it on these boards. I'm sure you'll find out.
 

OverHeat

« generous god »
Vetted implies a level of trust in someone's comments and opinions. A trust that, judging from ALL your post history and moral conduct, you do not warrant. And people who do not know better are led to believe your fraudulent and biased claims.

The problem is not someone being vetted. It's you being vetted. You probably manage assets or fill some janitor-like function in the industry, and yet you behave like you have the god-given right to claim a 5% difference is not being "competitive". In the other Resident Evil thread you LED people to believe the game has poor or absent RT, which is a LIE.

Carry on.
Look at tag...🤔
 

spyshagg

Should not be allowed to breed
Isn't it always opinions? Lol. Nobody here, on this entire forum, is expressing anything other than an opinion in the end. You interpret a result one way; it can be interpreted differently by someone else depending on their needs. If I could pick an Ampere card, I would. I prefer DLSS, VR & RT capabilities, and it also ends up being faster in rasterization at 4K, which is the cherry on the cake. Not to mention the VR streaming encoder, which Nvidia does way better.

One of the longest-lasting cards I had was the R9 280X, and even today it's still active after I gave it to someone. It's not my best memory of AMD because of pure raw performance vs the competition; I'm pretty sure Nvidia had a stronger competitor at the time. It's my best memory because, for the time, the value, especially at the price I got it for, was unbeatable. I'm sure many felt the same throughout AMD's years. Has AMD been fucked sideways throughout the years? Yes! Intel fucked them with the 64-bit debacle, Nvidia fucked with them by cheating here and there. But ultimately, I don't care. I'm not giving money to an indie rock band here because of their hardship in life; I always buy the best bang/buck, whatever the generation. If RDNA 3 beats them in the overall package of ML/RT/rasterization, I'll do a 180 on a dime.


You said this:


It's cute to still see so much AMD hope after all those decades. I used to have it too, even on this forum all the way back in 2004 (and I was on ATI much further back, in 1996).


There was only one meaning to that phrase, and I called you on it. No reason to be defensive about your intention there. Etc etc, true colors and all that.
 

spyshagg

Should not be allowed to breed
Dude, I didn't lead anyone in the RE thread. Everyone made their own observations. It does look like poor RT from what I've seen. But I can guarantee you that when I get the game, everyone will want to read my analysis on the graphics. That's where the "vetted" tag becomes meaningful. You don't earn any brownie points trying to follow me around claiming I don't know shit when that's clearly not the case.

And the janitor-like comment is hilarious. If you really want to know what I did/do .. do a search for it on these boards. I'm sure you'll find out.

Would it not be a wonderful world if you could be two opposing things at once? Oh, just imagine. To both claim to be a reputable, responsible professional, and in complete opposition behave the way you do here. Pick one, because they are not the same.

I've seen all your post history; you have a forum-warrior mentality, you have a complete bias towards manufacturers, and worse still is your moral corruption as seen in the RE8 thread. I would not hire you to clean a toilet.
 
Did you read the topic title, and the discussion in it, before you posted?

You could not help but come in here and spin the conversation the way that makes you happy. You could not live with yourself if any - ANY - positive AMD topic could go on without people knowing Nvidia is better.

You should be happy when incorrect information is pointed out. You should press F
 

ToTTenTranz

Banned
Exquisite and expensive memory architecture? I mean, if you have any information on the price Nvidia paid, please share. I'm pretty sure nobody knows. It's pretty much a logical conclusion that Samsung signed a complete package deal with Nvidia for the foundry and the memory. Nvidia isn't some nobody picking parts at the store at full price per unit.

Less expensive GDDR6 vs GDDR6X, yes, definitely, but is that the whole picture? No. SRAM has a cost, an enormous cost in fact; even if it doesn't show up on a bill of materials the way GDDR6 does, it's definitely expensive given how much die area is dedicated to it, all manufactured at one of the planet's best foundries.

Why are you questioning a statement just to prove it in the following sentences? GA102 uses 10 or 12 GDDR6X chips, which are made by a single producer using the first mass-market PAM4 implementation to date. Navi 21 uses 8 GDDR6 chips rated at 16Gbps that can be made by Samsung, SK Hynix and Micron. GDDR6X has an even more exclusive list of producers than HBM.
Of course the external memory subsystem and the PCB of GA102 are more expensive and therefore harder to scale down in price than Navi 21's.
You can argue whether or not the Navi 21 chip is more expensive to fab than a GA102, though considering how long 7nm has been around, plus the die size difference, I doubt you'd reach any relevant conclusion.



And all that for... a ~20 to 30 watt difference. What does that say?
Can we really say it's the memory choice? The foundry node difference alone should make this an even bigger gap. Nvidia could take their 5% lead at 4K, undervolt their cards with a patch to match AMD's performance, and be at what, 60W less?
What's your suggestion here? That only Nvidia cards are able to lower the clocks and voltage to increase performance per watt?
There's a SoC out there with a 12TFLOPs RDNA2 iGPU + 8 core Zen2 + 320bit GDDR6 consuming 200W at the wall.
None of the current desktop GPUs are operating within their ideal clock/power curves.
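For context, the usual rule of thumb (not a measured figure for either card) is that dynamic power scales roughly as

$$ P_\text{dyn} \approx C \cdot V^2 \cdot f $$

so the last few hundred MHz, which need a disproportionate voltage bump, cost far more power than they return in performance. That's the sense in which neither desktop card sits at its ideal point on the clock/power curve, and why undervolting or clocking down recovers so much efficiency on both vendors.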

We'll see how the notebook Ampere cards compare to the RDNA2 ones. I happen to know of a pretty good source who claims RDNA2 is most of all an architecture engineered for power efficiency because of laptops and consoles, and Navi 22/23 will excel in the mid to high-end gaming notebook market.


I'll reverse your last question: how is Nvidia competing in rasterization against a card almost exclusively made for it, when they dedicated so much silicon area to RT & ML? The fact that they're competing in rasterization performance despite that is mind-blowing.
They did not. The area dedicated to RT and ML is tiny compared to the area dedicated to general compute units, especially in Ampere where they doubled the FP32 throughput.
I can clearly see how your mind is blown, though.



I'm talking about the BVH acceleration though. No one in their right mind would prefer using the TMUs for testing intersection when they could be used for evaluating shaders. I don't understand your point here. Nvidia is ahead on this. Period.
Nvidia being ahead in RT performance doesn't mean Nvidia is ahead in producing the best real-time rendering visuals at a given cost and power consumption.
They certainly weren't chosen for Sony's or Microsoft's consoles. And this time around they had their own high-performance CPU cores to make a high-performance console SoC, so that excuse is out the window. And for BC, Microsoft's platform works over a virtual machine regardless.
DLSS2 is nice but it's neither widespread nor will it be without a competitor for long, according to their competitor.



The UE5 demo is stellar because of its geometry tessellation. That is all. Lumen isn't all that, as there are several other custom GI solutions that do the same. You also can't make something subjective into objective fact. No matter how pretty those static rocks look (and they can't deform), the overall look of the demo itself isn't very impressive. We can go into more detail if you want, or maybe you can make a graphics analysis thread on the pros and cons of why you think geometric tessellation is everything to a rendering pipeline. I'd gladly and objectively prove to you that it's lighting/shading instead.
I think you keep missing my point. The underlying technology, and how many compute cycles are spent on what, is irrelevant to the general audience. It doesn't matter if the rocks are supposedly static (like 99% of the geometry in videogames except maybe characters, so what's the point?) or how much tessellation there is.

You can't change the fact that Demon's Souls is at the moment the best-looking 9th-gen console title according to critical acclaim, while missing out on raytracing entirely.
There's also a good number of sources claiming the dev teams of the big engine makers (idTech, Frostbite, etc.) were thrown into disarray the day the UE5 demo was released to the public, and they all started working on similar approaches right away. It's also a fact that it made mainstream media news worldwide, despite how little you apparently think of it.
And these were all done without calculating a single BVH intersection.


Compared to these mass reactions, it matters very little that you're able to pinpoint inaccuracies in any of the implementations.


To both claim to be a reputable, responsible professional, and in complete opposition behave the way you do here.
AFAIK VFXVeteran started in the movie industry and is still in the movie industry, not the game industry. I don't know if he's worked in games in the past.
I do enjoy his analyses, but the conversation at hand isn't really technical anyway.
 
Last edited:

VFXVeteran

Banned
I think you keep missing my point. The underlying technology, and how many compute cycles are spent on what, is irrelevant to the general audience. It doesn't matter if the rocks are supposedly static (like 99% of the geometry in videogames except maybe characters, so what's the point?) or how much tessellation there is.

It doesn't matter what the general audience thinks concerning these matters. What matters is approximating the rendering equation as accurately as possible. That will objectively give artists the most accurate tools so that their vision can be realized.
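For reference, the equation being approximated here is the rendering equation:

$$ L_o(x,\omega_o) = L_e(x,\omega_o) + \int_{\Omega} f_r(x,\omega_i,\omega_o)\, L_i(x,\omega_i)\,(\omega_i \cdot n)\, \mathrm{d}\omega_i $$

Rasterization, screen-space effects and baked lighting all approximate the integral term with precomputation and tricks; ray tracing just evaluates it more directly.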

I mentioned static because it's a limitation of the technique. Godzilla has a tremendous number of triangles with subpixel accuracy, and he moves. Just because we have tech that can pull a high number of triangles into VRAM doesn't mean we are finished with our goal. I'm not satisfied with just tessellated static geometry. We need more GPU power.
You can't change the fact that Demon's Souls is at the moment the best-looking 9th-gen console title according to critical acclaim, while missing out on raytracing entirely.
It is not a fact. I don't want to get into a subjective argument here as it goes nowhere. I've given my analysis of DS and it's factual. The main highlight of that game is the texturing, as I stated and proved.

There's also a good number of sources claiming the dev teams of the big engine makers (idTech, Frostbite, etc.) were thrown into disarray the day the UE5 demo was released to the public, and they all started working on similar approaches right away. It's also a fact that it made mainstream media news worldwide, despite how little you apparently think of it.
And these were all done without calculating a single BVH intersection.
Dude, I don't know what you want from me. I know about the techniques and I know what makes pretty pictures. From experience, it's more of the rendering equation that makes things look stellar. We can't keep going down the road of 2D screenspace techniques and baked out lighting solutions. It won't look much better than it already does. FS2020 is about the limit in baked solutions that look damn good in lighting/shading - not UE5 or DS.

AFAIK VFXVeteran started in the movie industry and is still in the movie industry, not the game industry.
I am not in the movie industry. I'm at Lockheed Martin working on realtime systems for missiles/weapons.

You are being a prick trying to discredit my movie industry experience when it, in fact, deals with much more advanced techniques not even seen in the game industry. Literally everyone I've worked with in film is now working at either Nvidia, Intel, Naughty Dog, EA, Apple, etc. My old producer that I worked with on the Matrix films is now the CTO at Epic Games. Literally every single game engine uses the GGX lighting model from a guy at Disney that I personally worked with. You trying to discredit my take on games with film experience is laughable. The industry doesn't agree with you.
 

Soltype

Member
I think someone mentioned it earlier: the biggest problem with AMD is that they're putting all their eggs in one basket, which is fine if you're crushing it in that one area. The worst part is their pricing. Normally this wouldn't be an issue if they had a clear knockout compared to NV, but they're trading blows and have less mature features. The cards need to be $100 less than NV's offerings.
 
In this case, both Dirt and Valhalla are AMD games, with big banners on their site. I think it's pretty clear that Dirt 5 was doing some nasty stuff under the hood for Nvidia.

That's what Nvidia usually does with its black-box GameFucks.
When a game is shipped with an AMD partnership, it usually just means that the developers finally took care to optimize the code to also run well on AMD GPUs, something that they don't usually do.
Do you remember Fiji? It was around when Mantle/Vulkan came out and removed some of the bottlenecks that DX11 had on GCN. Remember how the cards with Fiji GPUs seemed possessed by the devil in Doom? With an unbelievable increase in performance? That's the same thing: just the code running correctly on the AMD architecture.

Back to the topic: Ampere GPUs are just better suited for higher resolutions, that's all. At lower resolutions it's easier to keep all the resources on RDNA/RDNA2 GPUs occupied, and the Infinity Cache gives an advantage in effective bandwidth.
 

llien

Member
The cards need to be $100 less than NV's offerings.
No.
AMD has quit that stupid "business" for good.
Love green-blue => stick with it.

Still, whining about pricing is particularly amusing given that GPUs 20% faster than the 2080 Ti were MSRP'd at half of its (real) MSRP of $1,200.

optimize the code to also run well on AMD GPUs, something that they don't usually do.
Given what GPUs are in consoles, I suspect your take on it is dated.
 
Last edited:

llien

Member
Ampere GPUs are just better suited for higher resolutions
Or rather, they take a dive at 1440p.

I wouldn't call a 10GB GPU suited for higher resolutions.

And as far as performance goes, if you pick the 7 most recent games:

[Chart: kNJfxkj.png]


AMD also benefits much more from resizable BAR (perhaps because of lower mem bandwidth)
 

Soltype

Member
No.
AMD has quit that stupid "business" for good.
Love green-blue => stick with it.

Still, whining about pricing is particularly amusing given that GPUs 20% faster than the 2080 Ti were MSRP'd at half of its (real) MSRP of $1,200.
I buy what makes the most sense, and the 2000 series was a bad buy and RDNA2 is not enticing enough at that price point. I have never stuck with any one brand; you'd be a fool to make purchases based on brand, especially in tech.
 
Last edited:

llien

Member
I buy what makes the most sense, and the 2000 series was a bad buy and RDNA2 is not enticing enough at that price point. I have never stuck with any one brand; you'd be a fool to make purchases based on brand, especially in tech.
Perception adds to it.
You view superior products (more VRAM, lower power consumption, faster in newer games, also at RT) as inferior.
I think fighting that by lowering prices is not the right way to go.

RDNA2's impact on the GPU market was disruptive. It's painfully obvious that NV was forced to drop a tier on its GPUs on top of (!!!) lowering prices (3080 with 10GB and 3060 with 12GB, cough).

The current "the planet is busy with the crypto bubble/Ponzi scheme" pricing is an anomaly (a 5700 XT going for $900); I'm talking only about MSRPs.
 

ToTTenTranz

Banned
It doesn't matter what the general audience thinks concerning these matters. What matters is approximating the rendering equation as accurately as possible. That will objectively give artists the most accurate tools so that their vision can be realized.

Of course it matters what the general audience finds to be the best/prettiest graphics. In the gaming industry, it's the audience's pocket money that ultimately pays for the devs' salaries.
And the audience isn't remotely interested in how much of what they're seeing is accurate simulation and how much is a trick.


It is not a fact. I don't want to get into a subjective argument here as it goes nowhere.
Demon's Souls being considered the best looking 9th-gen console game by the critics so far isn't a subjective argument because anyone can consult the articles that talk about it.

Gamingbolt, in a "Best Graphics of 2020" poll among the website's staff, where the winner was Demon's Souls: "We’re barely weeks into the 9th console generation, but with Demon’s Souls, it has already given us one of the best looking games we’ve ever played."
GamingNexus: "In fact, it stands out from the rest as the most visually impressive next-gen launch title."
TechCrunch: "As a next-gen gaming experience, however, Demon’s Souls is as yet without comparison. It serves as a showcase not only for the PS5’s graphical prowess, but its sound design, haptics, speed and OS."
ArsTechnica: "Seeing all of the aforementioned detail fill a 4K panel with HDR enabled is really unlike any other video game experience I've had this year."

How many more statements from critics do you need?
You keep conflating the public response to how the game looks with its technical qualities and defects. They're two different things.


We can't keep going down the road of 2D screenspace techniques and baked out lighting solutions. It won't look much better than it already does.
We can't keep going down the road of pure rasterization forever, but recent examples show there's a whole lot of performance-effective improvements that can be done before having to raytrace every light source, reflection and shadow.


You are being a prick trying to discredit my movie industry experience when it, in fact, deals with much more advanced techniques not even seen in the game industry.
WTF is wrong with you?
My comment on the movie industry was to explain to the guy going after you that, since you're not under NDAs for game dev projects, you are at liberty to freely express your opinions, that's all.
I didn't discredit anything. I actually defended you and even complimented your analyses, and this is how you respond?
 
Last edited: