
Navi 21 possibly runs at 2.2GHz with 80 CUs, Navi 22 at 2.5GHz with 40 CUs

BluRayHiDef

Banned
RE2 is an example of a game that will allocate more memory than it actually uses. I've done tests with a 1060 6GB and settings that allegedly would require over 12GB and it didn't hitch, stutter, or drop any frames. Same with a 2060S and over 11GB requirement on 8GB VRAM. Still runs just fine.

Off the top of my head, a more recent example of VRAM limitations being hit was the 2060 6GB in Wolfenstein Youngblood with RT. I saw a couple of benchmarks where the card doesn't scale in line with the 8GB+ cards like the 2060S and up, and performance completely falls off a cliff.

MS has kinda laid out the guidelines for next-gen Xbox/PC games, and that's ~4GB VRAM for 1440p/High and 10GB VRAM for 4K/Ultra.

10GB of VRAM for 4K Ultra means that PC gamers who want to game at those settings should get cards with more than 10GB of VRAM to have some overhead.
 

CrustyBritches

Gold Member
10GB of VRAM for 4K Ultra means that PC gamers who want to game at those settings should get cards with more than 10GB of VRAM to have some overhead.
Ideally that would be the situation. 16GB+ VRAM would be great for future-proofing. For my own use case I probably won't hang onto this upgrade long enough to see the implications of having 8GB or 10GB of VRAM over time; I just want something faster for Cyberpunk.

No doubt if I could pay a $100 premium over a 3070 8GB and get 16GB instead I would.
 

Marlenus

Member
Navi 21 likely to perform between 3070 and 3080

Too many unknowns to make any kind of inference of performance.

Provided good CU scaling and no memory bottlenecks, if an 80 CU card gets similar FPS/TFLOP to the 5700 XT, then it will beat the 3080.

If it struggles with CU scaling like all the 60+ CU GCN parts did then yea, maybe a bit behind 3080.

We won't know until more info becomes available.
 

ZywyPL

Banned
If it struggles with CU scaling like all the 60+ CU GCN parts did then yea, maybe a bit behind 3080.

There are no scaling issues in Doom, Battlefield, Forza etc. though... The biggest issue of the upcoming cards will be, as always, the drivers. Until AMD does something about them, everything that isn't a properly implemented DX12/Vulkan title will have its performance butchered.
 

KungFucius

King Snowflake
Why would Nvidia put out a high-end card without enough VRAM? That would be remembered by everyone and harm their image later on. They chose 10GB of GDDR6X over a larger amount of cheaper RAM for a reason. Hardware design is a long, systematic process where options are considered, evaluated, and the optimal solution for the market is chosen. Claiming 10GB will not be enough in a year or two, when you have no experience with hardware design, is foolish and insulting to the people who work their asses off to design the hardware you enjoy. They are experts, and they have consistently brought good products to market that have lasted several years.

Those expecting a 20GB version should expect to pay at least 200 bucks more for it, and likely much more. It will also probably only be on higher-end cards. And with 3090 performance being what it is with more cores, who is going to pay hundreds more for the RAM with no performance increase? It would make more sense to go cheaper now and upgrade to the next-gen GPU later than to go after more VRAM now in hopes of avoiding upgrading sooner.
 

Papacheeks

Banned
Why would Nvidia put out a high-end card without enough VRAM? That would be remembered by everyone and harm their image later on. They chose 10GB of GDDR6X over a larger amount of cheaper RAM for a reason. Hardware design is a long, systematic process where options are considered, evaluated, and the optimal solution for the market is chosen. Claiming 10GB will not be enough in a year or two, when you have no experience with hardware design, is foolish and insulting to the people who work their asses off to design the hardware you enjoy. They are experts, and they have consistently brought good products to market that have lasted several years.

Those expecting a 20GB version should expect to pay at least 200 bucks more for it, and likely much more. It will also probably only be on higher-end cards. And with 3090 performance being what it is with more cores, who is going to pay hundreds more for the RAM with no performance increase? It would make more sense to go cheaper now and upgrade to the next-gen GPU later than to go after more VRAM now in hopes of avoiding upgrading sooner.

The design has been talked about to death. It was rushed, and GDDR6X was actually needed because GDDR6 would not give them the bus width and bandwidth they needed. So they came up with GDDR6X, which is basically juiced GDDR6 and uses an insane amount of power compared to GDDR6.

Which is why you're seeing so many transistors and such high power usage.

RDNA 2 is the first step in AMD trying to make something more efficient, and that will be seen more in RDNA 3-4. If the cache/Infinity Fabric talk about these GPUs is true, in that they use 128MB of L1/L2 cache to get the effective bandwidth of a 384-bit bus out of a 256-bit one, then in the long term AMD made the better choice in node and design.

Ray tracing will be where AMD falls short, and honestly, so far nothing has impressed me ray-tracing-wise. It looks nice, but no one has really used it for much beyond ray-traced reflections.

10GB is not enough, and the cost of putting more memory on a card is getting close to the point where it becomes more expensive than it's worth, which I believe AMD saw with cards like the Radeon VII and the Instinct line.

The cost of having all that memory instead of a more efficient bandwidth solution was not going to get better even on a cheap memory solution.

Chiplet design is coming, and making the memory delivery on the video cards more efficient is the start of that journey.

Next couple years is going to be wild.
 

Dampf

Member
Off the top of my head, a more recent example of VRAM limitations being hit was the 2060 6GB in Wolfenstein Youngblood with RT. I saw a couple benchmarks where the card doesn't scale in line with the 8GB+ cards like 2060S up and performance completely falls off a cliff.
And even that can be avoided by simply changing image streaming to High, and then it runs smooth as butter on a 2060. You don't even compromise texture quality; it's the same as on Über and Ultra. It just changes the fixed memory budget id Tech reserves for itself.

So yes, VRAM is not a concern for next gen. 10 GB on a DX12 Ultimate GPU will be enough for 4K at console settings for the whole generation, as that is what Microsoft allocates as GPU-optimal memory on the 4K console, the Xbox Series X.
However, if you want higher graphical fidelity than the consoles on PC in a couple of years, then it's certainly good to have more VRAM. But the PC has a key advantage, and that is DRAM. You see, I think next-gen games will get very CPU intensive, and that means an increase in CPU memory usage for advanced physics, AI, game logic and so on. It is entirely plausible that a CPU-intensive next-gen game will allocate, for example, 6 GB for the CPU. In that case the 3.5 GB of slower-bus memory available to games is not enough, and it has to take the remaining 2.5 GB from the 10 GB of GPU-optimal memory, meaning you'd only have 7.5 GB left as video memory. On PC this doesn't happen, as the CPU-side data can just be fed into DRAM. That is a key advantage of the PC's dedicated RAM and VRAM.
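To make the arithmetic in that scenario concrete, here is a minimal sketch in Python. The pool sizes (10 GB GPU-optimal plus 6 GB standard, with roughly 2.5 GB of the slower pool reserved for the OS) are Microsoft's published Series X figures; the 6 GB CPU-side number is just the hypothetical from the paragraph above.

```python
# Minimal sketch of the Series X memory-split scenario described above.
# Known figures: 10 GB GPU-optimal (560 GB/s) + 6 GB standard (336 GB/s),
# with ~2.5 GB of the standard pool reserved for the OS.
GPU_OPTIMAL_GB = 10.0      # fast pool, intended for GPU data
STANDARD_GB = 6.0          # slower pool
OS_RESERVED_GB = 2.5       # taken out of the slower pool

def series_x_split(cpu_side_gb: float) -> dict:
    """Return what is left for GPU data if a game needs `cpu_side_gb`
    of CPU-side working memory (hypothetical input)."""
    standard_for_game = STANDARD_GB - OS_RESERVED_GB          # 3.5 GB for game CPU data
    overflow = max(0.0, cpu_side_gb - standard_for_game)      # spills into the fast pool
    gpu_data_budget = GPU_OPTIMAL_GB - overflow
    return {
        "standard_pool_for_game_GB": standard_for_game,
        "cpu_overflow_into_fast_pool_GB": overflow,
        "fast_pool_left_for_gpu_GB": gpu_data_budget,
    }

print(series_x_split(cpu_side_gb=6.0))
# -> {'standard_pool_for_game_GB': 3.5,
#     'cpu_overflow_into_fast_pool_GB': 2.5,
#     'fast_pool_left_for_gpu_GB': 7.5}
```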
 

Papacheeks

Banned
And even that can be avoided by simply changing image streaming to high and it runs smooth as butter on a 2060. You don't even compromise texture quality, it's the same as on Über and Ultra. It just changes the amount of fixed memory budget Idtech reserves for itself.

So yes, VRAM is not a concern for next gen, 10 GB on a DX12Ultimate GPU will be enough for 4k and console settings the whole generation, as that is what Microsoft allocates as GPU optimized memory.
However, you might want to get higher graphical fidelity than the consoles on PC in a couple of years, then it's certainly good to have more VRAM. But PC has a key advantage that is DRAM. You see, I think next gen games will get very CPU intensive in the future and that means an increase in CPU memory usage for advanced physics, AI, game logic and so on. It is entirely plausible a CPU intensive next gen game will allocate for example 6 GB for the CPU. There, the 3.5GB slower BUS for games is not enough and it has to get the remaining 2.5GB from the allocated 10 GB GPU optimized memory, meaning you'd only have 7.5GB left as video memory. On PC this doesn't happen as the CPU stuff can just be fed into DRAM. That is a key advantage of PCs dedicated RAM and VRAM.

All of those are old engines at this point. Everything going forward will require either more memory or a better way to increase bandwidth and bus width. Soon you will have an I/O controller on the GPU board.

That's when we'll see big changes. Right now 10GB looks like enough because you drank the Kool-Aid that NVIDIA and others are selling you. It's not.

You'll see how that outlook won't age well with the newest engines coming next year and beyond. Everything people are using as a comparison is super outdated. Case in point: Flight Simulator. That is the new benchmark.

And the new Battlefield will be too. You're going to see high VRAM usage and SSD requirements for sure on that title.
 

Dampf

Member
All of those are old engines at this point. Everything going forward will require either more memory or a better way to increase bandwidth, and bit bus. Soon you will have an I/O controller on a gpu board at some point.

Thats when we see big changes. Right now 10gb looks like enough because you drank the Kool-aid that NVIDIA, and others are selling you. It's not.

You'll see how that outlook wont age well with the newest engines come next year and beyond. Everything people are using as comparison is super outdated. Case in point is Flight simulator. That is the new benchmark.

And new battlefield will be too. Your going to see high VRAM usage and SSD requirements for sure on that title.
The memory requirements won't change much because it's exactly as you said: those are old engines. And these old games are programmed with slow hard drives and outdated I/O techniques in mind, where you NEED to have everything you might possibly need in RAM/VRAM because you have seek times and very low bandwidth on an HDD. Listen to what Mark Cerny said in his Road to PS5 presentation.

The next generation jump in texture fidelity comes from techniques like DirectStorage and Sampler Feedback, not from an increase in VRAM. It is why we have a very modest jump in RAM this generation compared to others.

Also, I'm not sure what you want to say about Flight Simulator. Flight Simulator allocates all of the available GPU memory but it doesn't actually need it, as TechPowerUp demonstrated by using a 16 GB and an 11 GB GPU. The 11 GB was completely allocated but it had the same frame pacing as the 16 GB GPU. You can even check real memory usage in Flight Simulator's development console and it's significantly less than what MSI Afterburner reports. So there's that.

[attached screenshot: Flight Simulator VRAM allocation benchmark]
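If you want to sanity-check a claim like that yourself, here is a rough sketch of how you might compare frame pacing between two cards from frametime logs. The file names and the percentile-based definition of "1% low" are my own assumptions for illustration, not taken from the TechPowerUp article.

```python
# Rough sketch: compare frame pacing between two cards from frametime logs
# (e.g. exported by a capture tool). File names are placeholders.
import statistics

def load_frametimes_ms(path: str) -> list[float]:
    # Assumes one frametime (in ms) per line.
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

def summarize(frametimes_ms: list[float]) -> dict:
    ordered = sorted(frametimes_ms)                        # ascending
    p99 = ordered[int(0.99 * (len(ordered) - 1))]          # 99th-percentile frametime
    p999 = ordered[int(0.999 * (len(ordered) - 1))]
    return {
        "avg_fps": 1000.0 / statistics.mean(frametimes_ms),
        "1%_low_fps": 1000.0 / p99,    # one common definition of "1% low"
        "0.1%_low_fps": 1000.0 / p999,
    }

for card, log in [("11GB card", "fs2020_11gb.csv"), ("16GB card", "fs2020_16gb.csv")]:
    print(card, summarize(load_frametimes_ms(log)))
# Similar 1%/0.1% lows on both cards => the extra 5 GB was allocation, not need.
```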
 

Papacheeks

Banned
The memory requirements won't change much because it's exactly as you said: those are old engines. And these old games are programmed with slow hard drives and outdated I/O techniques in mind, where you NEED to have everything you might possibly need in RAM/VRAM because you have seek times and very low bandwidth on an HDD. Listen to what Mark Cerny said in his Road to PS5 presentation.

The next generation jump in texture fidelity comes from techniques like DirectStorage and Sampler Feedback, not from an increase in VRAM. It is why we have a very modest jump in RAM this generation compared to others.

Also I'm not sure what you want to say about Flight Simulator. Flight Simulator allocates all of the available GPU memory but it doesn't exactly need it, as TechPowerUp demonstrated by using a 16 GB and a 11 GB GPU. The 11 GB was completely allocated but it had the same frame pacing as the 16 GB GPU. You can even check real memory usage in Flight Simulators development console and its significantly less than what MSI Afterburner reports. So there's that.

[attached screenshot: Flight Simulator VRAM allocation benchmark]

Yes, they will.

And if NVIDIA was so confident in 10GB, then why are they holding back different model variants with more memory?

Usage will increase as the engines change. We don't currently have a solution on PC for what the PS5 is doing with its use of the SSD as a virtual memory pool.

Games that come to PC targeting that config are going to need more memory to compensate for the lack of a big virtual memory pool, as the PC is behind in I/O integration with GPUs.
 

Dampf

Member
Yes, they will.

And if NVIDIA was so confident with 10gb, then why are they holding out different model variants with more memory?

Usage will increase as the engines change, We dont have a solution currently on PC for what PS5 is doing with it's use of the SSD as virtual memory pool.

Games that come to PC targeting that config, are going to need more memory to compensate for no big virtual memory pool as PC is behind in I/O integration with GPU's.

Because AMD could be using VRAM as a selling point and Nvidia wants to compete. That kind of misinformation about VRAM allocation is what leads both companies to release cards with more VRAM. There is a demand for VRAM and both companies try to satisfy that demand. Business.
That's not to say more VRAM will be useless in the future, however. It is especially great and needed for 8K gaming, and it will lead to even higher texture fidelity and asset quality in the future, far beyond the consoles. It's just not accurate to say that VRAM requirements are going to change because of the consoles. Recommended requirements could increase due to the ever-changing hardware in PCs, but not due to consoles. And a 2024 next-gen game with awesome quality will still run great on a 10 GB GPU at console-level settings. A 3080 will be too weak by then to even consider maxed-out PC graphical settings.

Yes, but there will be soon. DirectStorage will be coming to PCs next year: it's data decompression running on the GPU, and it feeds video-memory-related data directly into VRAM. It will use NVMe SSDs as a pool, similar to what Sony does. PCIe DMA has been done on servers for a long time now, but it was not implemented at the software level for consumer PCs; that's going to change when DirectStorage hits the PC next year. You might be right for a short period while consoles already have hardware data decompression and the PC does not. It is possible the PC will compensate with DRAM.
 

Papacheeks

Banned
BOOM! Infinity Cache is real (sorta)






Thanks.

This is what I was getting at.

NVIDIA does not have Infinity Fabric, and that is the kind of thing that leads to chiplets. Infinity Fabric will be used for treating an NVMe drive as virtual memory, and once they start putting NVMe storage and I/O controllers on the GPU board along with better L2 cache solutions, you're going to see better use of memory.

Something not seen so far in the RTX 3000 line.
 

00_Zer0

Member
Thanks.

This is what I was getting at.

NVIDIA does not have Infinity Fabric, and that is the kind of thing that leads to chiplets. Infinity Fabric will be used for treating an NVMe drive as virtual memory, and once they start putting NVMe storage and I/O controllers on the GPU board along with better L2 cache solutions, you're going to see better use of memory.

Something not seen so far in the RTX 3000 line.
I wonder if developers have to actively develop games around this extra cache, or do all games automatically take advantage of it? If it is something that developers have to implement themselves, then the feature will have a low adoption rate until Nvidia has such features in their cards. Plus, there's no telling how the extra cache is going to make up for the card only having a 256-bit bus.
 

Rikkori

Member
NVIDIA does not have infinity fabric, and this is something that leads to chiplet.
Right. That's why they went instead with helping develop GDDR6X. And this is not something that happens overnight; it's something crucial that takes years of planning and R&D to implement, it's not at all trivial to realise, and in fact it's not guaranteed to succeed (taken straight from the horse's mouth, their CTO's recent interview about data/compute), which, luckily for AMD, they had already done plenty of work on on the CPU side. No doubt Nvidia is working on their own things because they are an R&D behemoth as well, but who knows how that will end up and when. I mean, just look at Intel: they're bigger than both AMD and NV combined and they still have issues on this side.


I wonder if developers have to actively develop games around this extra cache, or do all games automatically take advantage of it? If it is something that developers have to implement themselves, then the feature will have a low adoption rate until Nvidia has such features in their cards. Plus, there's no telling how the extra cache is going to make up for the card only having a 256-bit bus.
No, this would be equivalent to having a higher clock, it's all done at a base level and it's just "sped-up" automagically. Not an accurate description, but it's what it is in spirit. Devs don't have to do anything.
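Back-of-the-envelope on the 256-bit question: if a large on-die cache services a fraction of memory requests, the DRAM bus only has to carry the misses, so the effective bandwidth seen by the GPU rises roughly by 1/(1 - hit rate). A minimal sketch with assumed, illustrative numbers (the real Infinity Cache hit rates aren't public), assuming 16 Gbps GDDR6:

```python
# Simplified model: a big cache in front of a 256-bit GDDR6 bus.
# Illustrative numbers only; real hit rates depend on workload and resolution.
def bus_bandwidth_gbps(bus_width_bits: int, gddr_speed_gbps: float) -> float:
    return bus_width_bits / 8 * gddr_speed_gbps   # GB/s

def effective_bandwidth(dram_bw: float, hit_rate: float) -> float:
    # Only (1 - hit_rate) of requests reach DRAM, so the same physical bus can
    # effectively feed more traffic (first-order model; ignores cache bandwidth
    # limits and latency effects).
    return dram_bw / (1.0 - hit_rate)

dram_256 = bus_bandwidth_gbps(256, 16)   # 512 GB/s on a 256-bit bus
dram_384 = bus_bandwidth_gbps(384, 16)   # 768 GB/s, what a 384-bit bus would give

for hit_rate in (0.0, 0.33, 0.5):
    print(f"hit rate {hit_rate:.0%}: ~{effective_bandwidth(dram_256, hit_rate):.0f} GB/s effective")
# A ~33% hit rate already puts the 256-bit card in 384-bit territory (~768 GB/s).
```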
 

Papacheeks

Banned
Right. That's why they went instead with helping out develop GDDR6X. And this is not something that happens overnight, it's something crucial that takes years of planning & R&D to implement and not at all trivial to realise and in fact not guaranteed to succeed at (taken straight from the horse's mouth - their CTO interview recently for data/compute), which luckily for AMD they had done plenty of such work on the CPU side. No doubt Nvidia is working on their own things because they are an R&D behemoth as well, but who knows how that will end up & when. I mean, just look at Intel, they're bigger than both AMD & NV combined and they still have issues on this side.



No, this would be equivalent to having a higher clock, it's all done at a base level and it's just "sped-up" automagically. Not an accurate description, but it's what it is in spirit. Devs don't have to do anything.

I think that's the big reason for them wanting to buy ARM. I heard a while back that they were in the planning phase for chiplets. AMD has already implemented that in their Zen 2 CPUs, and they use Infinity Fabric. They've had those same people working with the Radeon group for the past year or so.

GDDR6X is going to look stupid soon.

I wonder if AMD will stick with GDDR6 or, with the things happening in HBM, go for HBM3 when that hits production next year. Supposedly HBM3 is what AMD has been working towards for memory solutions in the data/enterprise sector.

If RDNA 2 is good, at a great price, with low power consumption compared to previous generations and to NVIDIA, it's going to be apparent why NVIDIA is willing to buy ARM.

I wonder if developers have to actively develop games around this extra cache, or do all games automatically take advantage of it? If it is something that developers have to implement themselves, then the feature will have a low adoption rate until Nvidia has such features in their cards. Plus, there's no telling how the extra cache is going to make up for the card only having a 256-bit bus.

Infinity Fabric is controlled automatically by the system. I don't believe they have to code for it engine-side, because it's about the final bus bandwidth, which is not something you have to write instructions for beyond setting an engine's hardware parameters. Usually those are detected automatically, and the monitoring tools in the engine will show analytics of what is using what and how much of the bus is being filled.

I mean I'm guessing, but from my understanding of Unreal 4 that stuff is automatic for the most part.

To the second part of your question: NVIDIA doesn't have anything to counter Infinity Fabric, which is used in AMD's CPU design and is now being implemented in their GPUs. Look at how it has helped with CCX latency in their CPUs.


Now apply that in a rudimentary way to their GPUs, using the L1/L2 caches on top of other "chips" or co-processors they may include on the GPU board.

NVIDIA wants to buy ARM for CPU manufacturing and to be a one-stop shop for devices using their GPUs, but also to develop something like Infinity Fabric. They are not a CPU manufacturer; ARM is. Do you see where they are behind?
 

SantaC

Member
Interesting on the 'Infinity Cache'; that was a big claim that RedGamingTech made a few weeks ago, which I was waiting to see whether it was BS or not.

And seeing it confirmed as real makes his credibility on AMD GPUs go right up.
Infinity Cache has been talked about earlier than that
 
Infinity Cache has been talked about earlier than that

It was the first time I'd heard someone claim it was in RDNA2, although of course his source was the Kopite guy on Twitter, IIRC, but he said he had the news confirmed by him or others that Big Navi is using it.
 

Kenpachii

Member
You found a guys Video where he talks about tech issues because he says that stuttering also happens in RE7, nice job proving nothing.
It's even playable on everything max and you can achieve consistent 60 by just turning a few sliders down from ultra to high.


I don't think you ever had a 970 or if you did, I don't know what you did with it that it performed so badly for you.
And that shit had gimped 3.5 + 0.5 like I said. VRAM literally doesn't kill performance as much as people seem to think. You're not going to tank from 120 to 10FPS or so lol.


Dude, he has VRAM issues, holy shit, did you even watch the video?

I've got a 970 and tested it on RE2, and VRAM bottlenecks the game all day long.

Even 4GB cards have issues in the game because it goes over that at max settings, which is why I stated a 6GB card is what you need, as a current-day requirement, to run it at ultra at 1080p.

That's double the amount of VRAM consoles currently allocate in their current-gen games.

And yes, when you hit a VRAM wall your FPS will drop considerably towards zero, because it will straight up hang. That's exactly what it does.
 

CuNi

Member
Dude he has v-ram issue's holy shit did you even watch the video.

I got a 970 and tested it out on RE2 and v-ram bottleneck the game all day long.

Instead of acting like a smartass, know what you're talking about. Even 4GB cards have issues in the game because it goes over that at max settings, which is why I stated a 6GB card is what you need in the current climate to run ultra at 1080p, let alone higher resolutions.

That's double the amount of v-ram consoles allocate currently in there current gen games.

Alright, good to know you didn't even watch the video nor check out the Benchmark-Results in the Image I posted from Techspot.
You can yell "BUT BUT VRAM!!!" all you like, it's not going to change the fact that this card is a 1080p monster and there is plenty of evidence that proves your assumption wrong.
That card runs that game just fine on 3.5+0.5GB VRAM, I even showed you two proofs for my claim.
Your video, which supposedly shows "he has VRAM issues omg!" just proves that you didn't even remotely check what the Video you yourself posted is about. It is not about VRAM issues per se, it is about a general Hardware/Software Issue.
If you had checked your own video, you'd see that he has only 2.2GB of VRAM allocated, which is still way below the 3.5GB of full-speed memory and even further below the full 4GB of VRAM the card has, so this literally cannot be caused by "not enough VRAM". It is an issue he has and is asking for help to solve, where his GPU usage goes from 100% to 5% and then ramps back up to 100%. This is not the normal behavior of this card.

Here, just to prove how wrong you are, I booted up the game, threw everything to max, selected DX12 and, oh, would you look at that, an average of around 60FPS, while VRAM is constantly at its 4GB capacity.

If you disagree with the results, go yell at your GPU or something. I and many many other 970 users enjoy our games running above supposedly "needed VRAM" capacities and still maintaining 60FPS.
 

Kenpachii

Member
Alright, good to know you didn't even watch the video nor check out the Benchmark-Results in the Image I posted from Techspot.
You can yell "BUT BUT VRAM!!!" all you like, it's not going to change the fact that this card is a 1080p monster and there is plenty of evidence that proves your assumption wrong.
That card runs that game just fine on 3.5+0.5GB VRAM, I even showed you two proofs for my claim.
Your video, which supposedly shows "he has VRAM issues omg!" just proves that you didn't even remotely check what the Video you yourself posted is about. It is not about VRAM issues per se, it is about a general Hardware/Software Issue.
If you would have checked your own Video, you'd see that he has only 2.2GB VRAM allocation, which is still way below the 3.5GB VRAM good memory and even shorter of the full 4GB VRAM the card has so this literally cannot be caused by "not enough VRAM" but is a issue he has and asks for help to solve where his GPU usage goes from 100% to 5% and then ramps back up to 100%. This is not the normal behavior of this card.

Here, just to prove how wrong you are, I booted up the game, threw everything at max, selected DX12 and, oh would you look at that, a average of around 60FPS, while VRAM is constantly at 4GB capacity.

If you disagree with the results, go yell at your GPU or something. I and many many other 970 users enjoy our games running above supposedly "needed VRAM" capacities and still maintaining 60FPS.


The video I posted was to demonstrate what happens when you run out of VRAM; that was the whole point, and nothing more than that.

You linked a video yourself that showcases the exact VRAM bottleneck I was talking about in a real environment, so good job on that, even if you didn't realize it and tried as hard as you could to disprove my statement.

Then you've got a lot of "interesting" statements:

1) Such as "VRAM literally doesn't kill performance as much as people seem to think. You're not going to tank from 120 to 10FPS or so lol. "

That's exactly what it does, but even worse, because that 10 FPS will be 0. It will hang until it catches up again. There is nothing worse than hitting a VRAM wall.

2) Then you've got stuff like this: "Here, just to prove how wrong you are, I booted up the game, threw everything to max, selected DX12 and, oh, would you look at that, an average of around 60FPS, while VRAM is constantly at its 4GB capacity"

- An average means nothing; you look at the low spikes, because that's where bottlenecks happen, and VRAM bottlenecks are hard for tools like RTSS to register, which will affect benchmark outcomes all day long (see the video below at point 3, 2:20, where the FPS counter stays at 14 even while it hangs).

It also really depends on where the benchmark takes place and how much time it spends in demanding vs. non-demanding areas, which heavily skews averages.

People use benchmarks to see what one GPU does vs. another GPU and to give an idea of the performance.

- Then about VRAM: actual VRAM usage isn't currently measurable through software in any useful way, which is why you test on hardware, and it's also why I stated 6GB is needed for max settings.

3) About the TechSpot benchmark: it proves my point (2) exactly.

Here's a demonstration of a 1060 with 3GB of VRAM at max settings.



VRAM bottlenecks with hiccups everywhere, to the point that it freezes completely. And that area isn't even demanding; check the RTSS FPS counter when the freezes happen.

If you want to know more about VRAM consumption and why 10GB is laughable on a flagship card, you can scroll through my many posts on it and on why a 10GB card isn't particularly next-gen ready but more of a current-gen card. I can't be bothered to repeat myself in every topic, it gets annoying and tiring, but in short: RTX, I/O, the Xbox Series X's 10GB of GPU-optimal VRAM, higher settings, higher base settings, your typical stuff.

Anyway, this thread is about Navi, and frankly I've spent enough time lecturing people for today, so I won't be going further into reactions; I don't think it's particularly useful at this point on this subject.
 

CuNi

Member
The video i posted was to demonstrate what happens when u run out of v-ram that was the whole point. And nothing more then that.

U linked a video yourself that showcases the exact v-ram bottleneck i was talking about in a real environment, so good job on that even while u didn't realize it and tried as hard as you could to disprove my statement.

Then u got a lot of "interesting" statements:

1) Such as "VRAM literally doesn't kill performance as much as people seem to think. You're not going to tank from 120 to 10FPS or so lol. "

That's exactly what it does but even worse as that 10 fps will be 0. It will hang until it catches up again. There is nothing worse then hitting a v-ram wall.

2) then u got stuff like this "Here, just to prove how wrong you are, I booted up the game, threw everything at max, selected DX12 and, oh would you look at that, a average of around 60FPS, while VRAM is constantly at 4GB capacity"

- average means nothing, u look at low spikes because that's where bottlenecks happen and v-ram bottlenecks are hard to register for stuff like rtss which will effect benchmark outcomes all day long ( see below video at point 3 2:20 where fps stays at 14 even while it hangs ) .

It's also really depends on the benchmark where it takes place and how much time it takes place on demanding vs non demanding area's which heavily skews averages.

People use benchmarks to see what a gpu does vs another gpu and to give a idea on the performance.

- Then about v-ram the actual v-ram usage isn't measurable through software currently in any useful way, this is why u use hardware and that's also why i stated 6gb is needed for max settings.

3) about the techspot benchmark that proofs my point (2) exactly.

Here's a demonstration of a 1060 at max settings with 3gb of v-ram.



VRAM bottlenecks with hiccups everywhere, to the point that it freezes completely. And that area isn't even demanding; check the RTSS FPS counter when the freezes happen.

If you want to know more about v-ram consumption and why 10gb is laughable on a flag ship card u can scroll through my many posts that went on about it and why a 10gb card isn't particulare next gen ready but more a current gen card. I can't be bothered to repeat myself in every topic about it gets annoying and tiring but in short, rtx, io, xboxsx 10gb v-ram, higher settings, higher base settings your typical stuff.

Anyway, this thread is about Navi, and frankly I've spent enough time lecturing people for today, so I won't be going further into reactions; I don't think it's particularly useful at this point on this subject.


I literally sent you a video where the gameplay is smooth and 99% free of microstutter, and when it does occur it's either an area change, which is acceptable, or literally due to the framerate when it jumps between 60 and 45-ish in some bad areas.
That's still with everything on ultra, and on DX12 it runs even smoother. If you turn down a few settings so VRAM "only" needs 9 to 10GB, it runs without any hiccups or stutter all day long, so literally get lost with your "VRAM bottleneck" bullshit. I posted videos that prove you wrong, benchmarks with the 1% low still above 50FPS and the 0.1% low still above 30FPS, yet you still talk about "educating" people because you can't deal with the fact that 10GB is plenty for gaming and most in-game VRAM estimates are completely oversized to be "on the safe side of things".
Paying for more VRAM on a lower-performing card is just as stupid as only focusing on performance and ignoring VRAM. You need to strike the golden middle. You could have all the VRAM in the world, but if the card can't render images fast enough you'll still be stuck at low FPS. The Radeon VII with its 16GB of HBM got outperformed by a 2070 Super with only 8GB of VRAM. Investing money into VRAM that you never use is quite the way to go.
 

Papacheeks

Banned
I literally send you a video where the gameplay is smooth and 99% free of microstutter and when it occurs it's either area change which is acceptable or literally is due to framerate when it jumps between 60 and 45 ish in some bad areas.
That's still with everything on ultra and on DX12 it runs even smoother. If you turn down a few setting so VRAM "only" needs 9 to 10GB this runs without any hiccups or stutter all day long so literally get lost with your "vram bottleneck" bullshit. I posted you Videos that prove you wrong, Benchmarks with 1% low still being above 50FPS and 0.1% low being still above 30FPS, yet you still talk shit about "educating" people because you can't deal with the fact that 10GB is hell of enough for gaming and most VRAM Game estimates are completely oversized to be "on the safe side of things".
Paying for more VRAM on a lower-performing card is just as stupid as only focusing on performance and ignoring VRAM. You need to strike the golden middle. You could have all the VRAM in the world, but if the card can't render images fast enough you'll still be stuck at low FPS. The Radeon VII with its 16GB of HBM got outperformed by a 2070 Super with only 8GB of VRAM. Investing money into VRAM that you never use is quite the way to go.

You're using a game on an optimized build of an engine meant to run on consoles with GPUs from 2013 as a barometer to disprove the VRAM bottleneck? At 1080p, at that? We are all talking about 4K.

None of those assets are running at 4K, nor were they built from 4K source assets. Everything going forward will be using 4K+ assets, where tons of downsampling is going to happen, but the density of assets like character models will increase 10 to 100 fold. You're only thinking about textures, filters, and current tessellation.

There are games with 4K assets that show VRAM use way beyond 8GB.

If 10GB were enough, why do both consoles have more than that to play with, even if you set aside 3-4GB for the OS?

Why did Sony see this as the big limiter, and an opportunity to use an SSD with its own custom I/O solution to mitigate re-filling the memory?


There are better sources out there than some random YouTube video running an old-ass card on a game designed to run on said old-ass card.
 
The memory requirements won't change much because it's exactly as you said: those are old engines. And these old games are programmed with slow hard drives and outdated I/O techniques in mind, where you NEED to have everything you might possibly need in RAM/VRAM because you have seek times and very low bandwidth on an HDD. Listen to what Mark Cerny said in his Road to PS5 presentation.

The next generation jump in texture fidelity comes from techniques like DirectStorage and Sampler Feedback, not from an increase in VRAM. It is why we have a very modest jump in RAM this generation compared to others.

Also I'm not sure what you want to say about Flight Simulator. Flight Simulator allocates all of the available GPU memory but it doesn't exactly need it, as TechPowerUp demonstrated by using a 16 GB and a 11 GB GPU. The 11 GB was completely allocated but it had the same frame pacing as the 16 GB GPU. You can even check real memory usage in Flight Simulators development console and its significantly less than what MSI Afterburner reports. So there's that.

[attached screenshot: Flight Simulator VRAM allocation benchmark]

This is interesting. I can see the resolution but what can you tell me about the settings?

Also, if you took this screenshot, where are you? Are you flying over a large city or are you sitting in the hangar?
 

CuNi

Member
Your using a game that is on a optimized build of an engine meant to play on consoles running gpu's from 2013, as a barometer to disprove the VRAM bottle neck? At 1080p at that? We are all talking about 4k.

Saying "it uses a Engine that was meant to play on consoles from 2013" is meaningless as those things are build to scale.
If engines would not scale, then next gen would also be like 2015 consoles because we have Series S and Xbox One X that still will get games each and they are by far not as powerful as Series X and PS5 so either you have to agree that engines scales or if they don't then next gen games will be build to run on Series S first and then "ported" to higher end Versions.

Spoiler: We know it scales.

If 10gb was enough why do both consoles have over that to play with? Even if you set aside 3-4gb's for OS.

Consoles have 16GB of RAM which is used as both VRAM and RAM, whereas PCs have dedicated VRAM and dedicated RAM, so saying consoles have more than 10GB is meaningless: all we know is that 2.5GB is used for the OS, which leaves 13.5GB to be split between VRAM and normal RAM. Games will obviously split it differently, so we cannot say definitively how much will be used as "VRAM" and how much will end up being "normal RAM", etc.
On top of that, the Series X and PS5 use GDDR6, whereas the 3080 uses GDDR6X, which has an even higher I/O rate and bandwidth.

Why did Sony see this as being the big limiter and opp to use an SSD with it's own custom I/O solution to mitigate re-filling of the memory?

Sony, Microsoft, Nvidia and probably also AMD moved because memory capacity was not the issue; bandwidth was. If you can increase bandwidth and stream in assets quickly, then VRAM does not need to increase by as much, because you don't need to hold all the textures, models, etc. that you MIGHT NEED; you only have to hold WHAT YOU REALLY NEED. That's why the new consoles use SSDs for higher bandwidth, that's why Nvidia made RTX IO, and that's why AMD will probably also have some technology of that sort. If not, they will need a lot more VRAM to run comparably to a 3080 with less memory.
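To put rough numbers on that trade-off, here's a small sketch of the streaming-budget arithmetic. The bandwidth figures are illustrative assumptions (only the PS5's roughly 5.5 GB/s raw / ~8-9 GB/s typical compressed throughput is a published figure), and the 10 GB resident set is just an example size:

```python
# Rough streaming-budget arithmetic: how quickly can storage refresh VRAM contents?
# Numbers below are illustrative assumptions, not measurements.
def refresh_time_s(resident_set_gb: float, effective_ssd_gbps: float) -> float:
    """Seconds needed to completely replace `resident_set_gb` of resident assets."""
    return resident_set_gb / effective_ssd_gbps

scenarios = {
    "SATA HDD (~0.1 GB/s)": 0.1,
    "PCIe 3.0 NVMe (~2.5 GB/s effective)": 2.5,
    "PS5-class I/O (~8 GB/s after decompression)": 8.0,
}

for name, bw in scenarios.items():
    t = refresh_time_s(resident_set_gb=10.0, effective_ssd_gbps=bw)
    print(f"{name}: ~{t:.1f} s to turn over a 10 GB resident set")
# With ~100 s turnover (HDD) you must cache everything you MIGHT need;
# with ~1-2 s turnover you can afford to hold only what you REALLY need.
```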

There are better sources out there than some random youtube video running a old ass card on a game designed to run on said old ass video card.

Like, for example, Doom Eternal, where the 1080 Ti with more VRAM performs worse than the RTX 2080 Super.
VRAM is not everything. Of course having way too little will impact performance, but "too little" is 6GB or less.
Obviously there are no "viewable" examples of 3070 performance yet, but we know it will perform roughly equal to or better than a 2080 Ti while having 3GB less VRAM.
The 3080 also has less VRAM than the 2080 Ti, and yet it runs roughly 25% or more better at 4K. How can this be?

It's because, like I said, games like to allocate more than they actually use. With higher bandwidth, the impact of a "VRAM bottleneck" gets smaller and smaller, because the time the GPU needs to wait for data is reduced as well.
You can be sure RTX IO will be used by all the big blockbuster triple-A titles out there, since Nvidia is good at moneyhatting features into games, so the games that could show a VRAM bottleneck will be optimized to use VRAM as efficiently as possible in the future. I guarantee you the RTX 4000 series will probably only have 12GB of VRAM as well. The next big jump will happen when the narrative shifts from 4K to 8K, which is equally stupid IMHO.

Edit:
Just to make it clear, more VRAM will always be nice, but saying it is a gimped card or not future proof or whatever is just equally wrong.
In my own personal opinion, I am pretty confident that the RTX 3080 will perform better across the board at 4K than Big Navi and its likely 16GB of VRAM, because of all the tech we have in place that means VRAM no longer needs to increase by big margins. RTX IO is coming, we have DLSS and we have GDDR6X, which all either increase bandwidth, reduce the time the GPU waits for needed VRAM data, or reduce the overall VRAM needed by using AI upscaling. And let's face it, those things are here to stay and will only improve as time goes on.
 

Dampf

Member
This is interesting. I can see the resolution but what can you tell me about the settings?

Also, if you took this screenshot, where are you? Are you flying over a large city or are you sitting in the hangar?
I did not take these screenshots and I don't have the game currently, sadly.

I agree it would be very interesting to test this in big demanding cities like New York.
 
Thats when we see big changes. Right now 10gb looks like enough because you drank the Kool-aid that NVIDIA, and others are selling you. It's not.
Nvidia has been selling GPUs with less VRAM than AMD for a very long time now... I always thought it would hit their customers at some point, but man, these GPUs always remained good enough for however long they were needed (the low-end 8GB Radeon 570 will never run a game at high enough settings to take advantage of its memory pool; the only use for it is cryptocurrency mining).

By the time you would need that 10GB on the 3080, its GPU will not be fast enough to handle games at settings that require that amount of memory anyway.
 

Papacheeks

Banned
Nvidia has been selling GPUs with less VRAM than AMD for a very long time now... I always thought it would hit their customers at some point, but man, these GPUs always remained good enough for however long they were needed (the low-end 8GB Radeon 570 will never run a game at high enough settings to take advantage of its memory pool; the only use for it is cryptocurrency mining).

By the time you would need that 10gb on the 3080 it's GPU will not be fast enough to handle the games in settings that require that amount of memory anyway.

The reason the needle hasn't moved that far in terms of needed memory bandwidth is asset quality. Most developers are not using movie-quality assets and scaling down. A lot of them are making assets with much lower pixel density, with textures taking the bulk of the high-fidelity work.

The Unreal 5 demo was using movie-quality assets and you could tell. Next gen is going to start showing that kind of quality once newer engines are finished. Some developers are using current engines, but they are going to increase the quality of the assets used in their games, from models to textures to dynamic lighting systems.

Mainly, the quality of the assets they are using is going to increase tenfold, and we will see that starting next year.

Then you will see VRAM usage skyrocket, which is why NVIDIA is trying to sell DLSS so hard.
 

regawdless

Banned
I'm mainly interested in the raytracing capabilities of these new cards, very curious how AMD will deal with it. When actual benchmarks hit, the internet will be on fire.
 
The reason why the needle hasn't moved that far in terms of needed memory bandwidth, is because of asset quality. Most developers are not using movie quality assets and scaling down. A lot of them are making assets with much lower pixel density with textures taking the bulk in terms of being made at high fidelity.

Like the Unreal 5 demo was using movie quality assets and you could tell. Next gen is going to start showing that kind of quality once newer engines are finished. Some developers are using current engines, but are going to increase the quality of assets used in their games from models, to textures, to dynamic lighting systems.

Mainly the quality of assets they are using is going to increase 10 fold, and we will see that starting next year.

Then you will see VRAM usage skyrocket. Which is why NVIDIA is trying to sell DLSS so hard.


movielike assets huh :)))

Vram cant "skyrocket" since there's only 13 usable gigs for games in the new consoles, shared by the entire system. It can only "skyrocket" to that
 

Papacheeks

Banned
movielike assets huh :)))

Vram cant "skyrocket" since there's only 13 usable gigs for games in the new consoles, shared by the entire system. It can only "skyrocket" to that

There's a reason compression and decompression techniques are currently being used on PC and console, and there's a reason NVIDIA and AMD both have solutions for such an increase. Add to that the fact that the consoles have 16GB of shareable memory, and the PS5 is designed around being able to use its 825GB SSD as one big memory pool to stream assets into memory.

You'll see by next year, when Unreal 5 games start showing off, along with Frostbite's newly updated engine. 4A Games has been working on their engine as well, from what I've heard.

There's a reason Guerrilla Games is doing so much work on their engine. They talk to other teams at Sony directly. A lot of what they develop on Decima is given to the ICE team, in terms of what the system is doing under load with their assets and game builds.

Next year and beyond will be the real test. By then more companies will have updated their engines or completed new ones to support this change in asset quality.

You will see movie-style assets being used by Naughty Dog and Santa Monica in their newest projects.

There's a reason asset quality was highlighted in the demo, with references to The Mandalorian, which used Unreal 4 for most of its effects. Imagine that quality from a movie being used in a game.
 

Rikkori

Member
There's a reason asset quality was highlighted in the demo, with references to The Mandalorian, which used Unreal 4 for most of its effects. Imagine that quality from a movie being used in a game.
Right, just like with dynamic resolution and all sorts of other nonsense, they will check the 'technically true' (in lawyer-speak) box, but in reality they will deliver something subpar compared to what they're advertising.

Compare the sharpness & detail of UE5 Demo's assets when used in UE4 at their actual quality vs what they show in UE5 where they "dynamically scale" these assets:


 

martino

Member
Right, just like with dynamic resolution and all sorts of other non-sense they will check the 'technically true' (in lawyer-speak) box but in reality will deliver something subpar compared to what they're advertising.

Compare the sharpness & detail of UE5 Demo's assets when used in UE4 at their actual quality vs what they show in UE5 where they "dynamically scale" these assets:



imo it's not hard to see the difference
 

Rikkori

Member
imo it's not hard to see the difference
Yes, that's my point. You have clearly better PQ with the old way of doing things (though it's much more expensive performance-wise). What they're bragging about with UE5 is really no different than when consoles bragged about 4K but it was "dynamic", "reconstructed" bla bla.

The real question for me is how it would scale on a real 8K display. Would it still be inferior or would it (the assets) actually show up properly?
 
The reason why the needle hasn't moved that far in terms of needed memory bandwidth, is because of asset quality. Most developers are not using movie quality assets and scaling down. A lot of them are making assets with much lower pixel density with textures taking the bulk in terms of being made at high fidelity.

Like the Unreal 5 demo was using movie quality assets and you could tell. Next gen is going to start showing that kind of quality once newer engines are finished. Some developers are using current engines, but are going to increase the quality of assets used in their games from models, to textures, to dynamic lighting systems.

Mainly the quality of assets they are using is going to increase 10 fold, and we will see that starting next year.

Then you will see VRAM usage skyrocket. Which is why NVIDIA is trying to sell DLSS so hard.

But...movie production rendering farms use many of those same Nvidia GPUs so... :S
 
The XSX has 52 CUs, from a 56 CU chip. So there is currently nothing in the Navi line-up that reflects the XSX GPU. The PS5 would be the 40 CU equivalent, with 4 CUs disabled.

The Xbox Series X is based on Navi 21 Lite; this PC GPU has 56 CUs and is clocked at 2050 MHz.
The PS5 is based on Navi 22; this PC GPU has 40 CUs and is clocked at 2500 MHz.
 

VFXVeteran

Banned
All of those are old engines at this point. Everything going forward will require either more memory or a better way to increase bandwidth, and bit bus. Soon you will have an I/O controller on a gpu board at some point.

Thats when we see big changes. Right now 10gb looks like enough because you drank the Kool-aid that NVIDIA, and others are selling you. It's not.

You'll see how that outlook wont age well with the newest engines come next year and beyond. Everything people are using as comparison is super outdated. Case in point is Flight simulator. That is the new benchmark.

And new battlefield will be too. Your going to see high VRAM usage and SSD requirements for sure on that title.

I approve this message. I can see a lot of memory usage and activity (allocation, etc.) with my 3090 compared to the 2080 Ti. 10GB of VRAM is barely enough now at 4K; VRAM allocation in a lot of current games comes close to that. Even if an application doesn't exhaust the VRAM (why would it, that would be an inefficient use of resources), accruing large memory allocations has been shown to affect framerate. Crysis Remastered right now allocates the most VRAM I've seen in a game when running at the "Does it run Crysis" settings at 4K: a 15GB allocation peak with full SVOGI RT.
 