
Navi 21 possibly runs at 2.2GHz with 80CUs, Navi 22 at 2.5GHz with 40 CUs

VRAM capacity is more important than memory bandwidth? I'm genuinely curious.

No, not exactly. But for 4K gaming it just doesn't have enough VRAM moving forward, and for a very capable 4K GPU that's a problem. How powerful the GPU is doesn't match how much VRAM it has.

When I got my 1080ti 3.5 years ago, its 11GB of VRAM seemed like a luxurious amount. Really allowed a lot of room to grow. Years later there are games starting to pop up which will use nearly that much. This is great. I have never been bottlenecked by the amount of VRAM in my 1080ti.

10GB is "enough" for 2020, but I feel like it won't be sometimes in 2021 and in 2022 that 10GB will be a major bottleneck, even though the rest of the GPU is more than capable. And if I'm going to spend that much $$$ on a GPU, I want to feel like I'm getting a luxurious amount of VRAM.

10GB of VRAM is NOT enough for next gen.

Just wait for the 20GB version. You'll be glad you did. :messenger_sunglasses:
 

BluRayHiDef

Banned
No, not exactly. But for 4K gaming it just doesn't have enough VRAM moving forward, and for a very capable 4K GPU that's a problem. How powerful the GPU is doesn't match how much VRAM it has.

When I got my 1080ti 3.5 years ago, its 11GB of VRAM seemed like a luxurious amount. Really allowed a lot of room to grow. Years later there are games starting to pop up which will use nearly that much. This is great. I have never been bottlenecked by the amount of VRAM in my 1080ti.

10GB is "enough" for 2020, but I feel like it won't be sometimes in 2021 and in 2022 that 10GB will be a major bottleneck, even though the rest of the GPU is more than capable. And if I'm going to spend that much $$$ on a GPU, I want to feel like I'm getting a luxurious amount of VRAM.

10GB of VRAM is NOT enough for next gen.

Just wait for the 20GB version. You'll be glad you did. :messenger_sunglasses:

I have no interest in getting an RTX 3080 for that very reason myself; I also have a 1080Ti (I've had it since 2017). I'm actually interested in getting an RTX 3090, because of its buttload of VRAM, which will assure that it'll never be memory starved and which will be useful for my amateur video editing.
 
I have no interest in getting an RTX 3080 for that very reason myself; I also have a 1080Ti (I've had it since 2017). I'm actually interested in getting an RTX 3090, because of its buttload of VRAM, which will assure that it'll never be memory starved and which will be useful for my amateur video editing.

If money is not an issue, do it :messenger_savoring:
 

Senua

Member
Didn't he say 4x RTX performance increase or something? I only read what people post of him, cbf with these 20 min long videos that are prob BS anyway lol
 

bohrdom

Banned
No, not exactly. But for 4K gaming it just doesn't have enough VRAM moving forward, and for a very capable 4K GPU that's a problem. How powerful the GPU is doesn't match how much VRAM it has.

When I got my 1080ti 3.5 years ago, its 11GB of VRAM seemed like a luxurious amount. Really allowed a lot of room to grow. Years later there are games starting to pop up which will use nearly that much. This is great. I have never been bottlenecked by the amount of VRAM in my 1080ti.

10GB is "enough" for 2020, but I feel like it won't be sometimes in 2021 and in 2022 that 10GB will be a major bottleneck, even though the rest of the GPU is more than capable. And if I'm going to spend that much $$$ on a GPU, I want to feel like I'm getting a luxurious amount of VRAM.

10GB of VRAM is NOT enough for next gen.

Just wait for the 20GB version. You'll be glad you did. :messenger_sunglasses:

Hmm, I've seen this VRAM size complaint brought up quite a few times before. I'm not convinced that this will be a bottleneck for quite some time, and by that time you'll most likely be in the market to buy a new GPU.

I think before you're bottlenecked by VRAM size you're gonna be bottlenecked by ROP perf or bandwidth. That's really where rendering tech is struggling right now. You really only need to care about VRAM if you're using it for workstation related work.
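To put rough numbers on the bandwidth point (the bandwidth figures below are the cards' public specs; the frame-rate targets are just for illustration):

```python
# A GPU can only touch as many bytes per frame as its memory bandwidth allows,
# regardless of how much VRAM is sitting on the board.
cards_gb_per_s = {"RTX 3080 (10GB)": 760, "RTX 2080 Ti (11GB)": 616}

for name, bandwidth in cards_gb_per_s.items():
    for fps in (60, 120):
        print(f"{name}: at {fps} fps, at most ~{bandwidth / fps:.1f} GB of memory traffic per frame")
```

Anything resident beyond what you can actually read in a frame is effectively just a cache, which is where the allocation-vs-use argument later in the thread comes from.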
 

Type_Raver

Member
Bring. It. On!

Since my EVGA 3080 ftw3 Ultra is taking its sweet time to arrive (likely early nov), that navi 21 just might be the alternative!

Just really curious to see how ray tracing performs.
 
Please link to the videos he said that in as I've never heard him say any of this.


"Even the lower-tier Ampere cards could vastly outperform Turing cards" when talking about ray-tracing which we know is bullshit. "Ampere cards can perform intersection at 4x the speed of Turing" Right.
"In Minecraft RTX, GA102 was 4-5x faster than Titan RTX."

Also some bullshit about DLSS 3.0 and in another vid before this one, he basically claimed Ampere using RT incurred almost no performance loss or none at all.

He's full of shit.
 
Hmm, I've seen this VRAM size complaint brought up quite a few times before. I'm not convinced that this will be a bottleneck for quite some time, and by that time you'll most likely be in the market to buy a new GPU.

I think before you're bottlenecked by VRAM size you're gonna be bottlenecked by ROP perf or bandwidth. That's really where rendering tech is struggling right now. You really only need to care about VRAM if you're using it for workstation related work.

Disagree. Ampere won't be bottlenecked by those things for a long long time. Ampere is swimming in bandwidth. Why would you even bring that up?

There are already games that will use north of 10GB of VRAM and they are doing that now before next gen has actually started. My 1080ti had 11GB of VRAM 3.5 years ago. At the time it felt like a luxurious amount, more than I needed at the time by quite a bit. Really had room to grow. I've never felt my 1080ti has been bottlenecked by the amount of VRAM it has. A 10GB 3080 has absolutely no room to grow.

10GB is "enough" for gaming in 2020.

10GB will be a noticeable limitation in some/many crossgen games in 2021

10GB of VRAM will be a major limitation for the 3080 in 2022 while the rest of the GPU is still extremely capable.

That's my prediction.

If you replace your GPU every year or two then maybe this isn't an issue. If you want and expect your $700/800 GPU to last as long as possible ... then just wait for the 20GB 3080. You could use the 20GB version for all of next gen if you really wanted to stretch it.
 

CrustyBritches

Gold Member
Thugnificient

Concerning GA102 he stated:

-Card will boost above 2.2GHz
-5376 CUDA cores
-18Gbps memory
-4-5x the RT performance
-2080ti with Ampere RT cores would do RT 4x faster
-Doubling of tensor cores
-The entire high-end will use [TSMC]7nm EUV
-Nvidia will enable DLSS 3.0 by default, overriding settings for benchmarking sites

Also:
-3060 could perform like a 2080ti
-Turing will age like Kepler

 

bohrdom

Banned
Disagree. Ampere won't be bottlenecked by those things for a long long time. Ampere is swimming in bandwidth. Why would you even bring that up?

It's swimming in bandwidth relative to the competition. Bandwidth is still almost always the bottleneck in high throughput systems. Manufacturers hide this issue by adding more VRAM to compensate for a lack of bandwidth between the VRAM and ROPs. If you want I can explain this in a bit more detail.

There are already games that will use north of 10GB of VRAM and they are doing that now before next gen has actually started. My 1080ti had 11GB of VRAM 3.5 years ago.

Allocation vs Utilization. People meme this on gaf but it's actually relevant to this conversation. Your 1080 Ti has 11 GB of memory but it still underperforms compared to a 2070 Super/2080 which has 8 GB of memory. That means the throughput issues really aren't in the VRAM size.
 
It's swimming in bandwidth relative to the competition. Bandwidth is still almost always the bottleneck in high throughput systems. Manufacturers hide this issue by adding more VRAM to compensate for a lack of bandwidth between the VRAM and ROPs. If you want I can explain this in a bit more detail.



Allocation vs Utilization. People meme this on gaf but it's actually relevant to this conversation. Your 1080 Ti has 11 GB of memory but it still underperforms compared to a 2070 Super/2080 which has 8 GB of memory. That means the throughput issues really aren't in the VRAM size.

In what way does the 1080ti underperform compared to a 2070Super? (not including raytracing of course) And I'm not talking about performance at all. The 3080 already has the performance. My point is that faster memory is in no way a substitute for more memory.

At a certain point, if you don't have enough VRAM you have to lower settings - settings that the rest of the GPU may be perfectly fine with handling.

This is a question of balance. A 1080ti with 4GB of VRAM would have been hopelessly unbalanced and nearly useless in 2020. It literally wouldn't have been able to play some games. And then there would have been a vast list of games that it "could run" but at much lower settings and resolutions than it was capable of, simply because of not having enough VRAM.

This is the situation the 3080 will find itself in sooner rather than later.

10GB of VRAM is "enough" in 2020, but we are about to cross into next gen and it wont be enough moving forward. The consoles have potentially more usable VRAM. That's a disaster for the balance and longevity of the 3080.
 

CuNi

Member
10GB of VRAM is "enough" in 2020, but we are about to cross into next gen and it wont be enough moving forward. The consoles have potentially more usable VRAM. That's a disaster for the balance and longevity of the 3080.

Especially in the future it will be even less of an issue, with RTX IO coming etc.
VRAM usage also bloats with AA, which you don't need at 4K and is just a waste of resources.
Just because a game "asks" for 12GB of VRAM doesn't mean it actually needs that much. People seem to not understand this.
RE3 Remake eats up ~9 to 10GB of VRAM on a 2080ti, yet you can easily play this game on a gimped 970 with 3.5+0.5GB (I know this for a fact, I do it) just as well.
According to RE3 Remake it should take up 14GB of VRAM with everything on max, yet it still runs butter smooth on the 2080ti and, with some lowered settings, also on the 970.

People make a way bigger fuss about 10GB than it deserves, especially since they always turn AA up as high as possible, further eating away VRAM, and yet they're still gaming just fine.
 

bohrdom

Banned
In what way does the 1080ti underperform compared to a 2070Super?

Apologies. I should have checked if it was 2070S/2080S. I meant 2080 Super. It has 8 gb of VRAM.

My point is that faster memory is in no way a substitute for more memory.

My point is that it doesn't matter if you have more memory if your computational units can't consume it fast enough. If you're worried about a game allocating 10+ gigs of VRAM, don't be; your OS manages that for you. Here's a video that explains it, I've timestamped it.

The only reason why I'm pushing back on this is so that people are informed and don't pay more than they have to for a dope gaming experience. A 20 gig card is gonna cost a fortune because GDDR6X is expensive and they don't wanna cannibalize 3090 sales.
 

smbu2000

Member
Thugnificient

Concerning GA102 he stated:

-Card will boost above 2.2GHz
-5376 CUDA cores
-18Gbps memory
-4-5x the RT performance
-2080ti with Ampere RT cores would do RT 4x faster
-Doubling of tensor cores
-The entire high-end will use [TSMC]7nm EUV
-Nvidia will enable DLSS 3.0 by default, overriding settings for benchmarking sites

Also:
-3060 could perform like a 2080ti
-Turing will age like Kepler

Don't the leaked Galax slides of the RTX roadmap show the 3070 as sitting between the 2080S and the 2080 Ti? It seems highly unlikely the 3060 would be even close to the 2080 Ti.
https://www.kitguru.net/components/...ked-roadmap-shows-rtx-3080-20gb-and-rtx-3060/
It seems like the 3080 cards don’t really get much benefit from overclocking as they are already highly clocked out of the box. The 2000 series cards can OC very well and receive a greater benefit from it compared to 3000.
 
Could a custom Navi 31 (80 CUs with an identical configuration to Navi 21; possibly RDNA3) be the GPU of a hypothetical PS5 Pro coming out in Nov 2022? Just wondering if this would be feasible in terms of cost and heat dissipation two years from now.

Maybe. But I'm kinda thinking both Sony and MS aren't going to do "Pro"-style mid-gen refreshes the way the PS4 Pro and One X were. For one, they may want a bigger technological impact for PS6/Series X-2, and Pro-model mid-gen refreshes temper that.

Secondly, these GPUs are probably on 7nm EUV, and I'm guessing the consoles are 7nm DUV enhanced (or at least PS5 is). Plus, these 2.2 GHz and 2.5 GHz are Boost Clock settings, and scaling of power to frequency is not a 1:1 linear match, as we already know for PS5 from Cerny (10% power reduction for 2% frequency reduction = 5:1 ratio). I'd shiver to think the amount of power these Navi GPUs will be consuming at their Boost clocks, even with the power consumption reduction (which I'd peg at 30%).
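A toy model of why those boost clocks get expensive: dynamic power scales roughly with C·f·V², and near the top of the voltage/frequency curve the voltage has to rise with frequency, so power grows closer to the cube of the clock. The 2.0 GHz baseline below is just an assumption for illustration:

```python
# Rough cubic power-vs-frequency model (a simplification; real silicon is
# even worse right at the limit, which is how you get Cerny's "drop the
# clock a couple of % and save ~10% power" observation -- 0.98**3 alone
# would only save ~6%).
def relative_power(f_target, f_base):
    return (f_target / f_base) ** 3

base = 2.0  # GHz, assumed "comfortable" baseline clock
for target in (2.2, 2.5):
    print(f"{base} GHz -> {target} GHz: ~{relative_power(target, base):.2f}x the dynamic power")
```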

So basically, it might be unrealistic to expect a PS5 Pro doubling the base's TF even in 2023 or 2024, unless AMD make some insane advances in 3D die packaging (an area they seem to be behind when compared to Intel). Even then, it'd have to be on 5nm EUVL, with some absolutely aggressive performance gains via state-of-the-art 3D stacking methods like POP (package-on-package), etc., to stay in a decent console TDP budget (somewhere around 200 watt TDP for the whole package).

I honestly don't see that happening by 2023 or 2024 while staying affordable; Sony and MS will have to think smarter for the mid-gen refreshes this time around. They can't count on basic Moore's Law and node shrinks alone.

EDIT: Oh, you said two years from now? 2022? No way, forget it. Not possible anywhere near that soon.

10GB is "enough" for gaming in 2020.

10GB will be a noticeable limitation in some/many crossgen games in 2021

10GB of VRAM will be a major limitation for the 3080 in 2022 while the rest of the GPU is still extremely capable.

That's my prediction.

TBF, if a lot of games didn't resort to using the VRAM as a cache (or reserving it as future cache), then that'd free up a lot of that 10 GB to actually be used for highly pertinent graphics data, rather than stuff from many seconds out being stored there as cache.

That's one of the really good things about PS5, Series X and Series S, in that they're looking to free up the need for data caching in VRAM by resolving bottlenecks in the I/O subsystems and structures. And once this proliferates on PC more predominantly (thanks to stuff like RTX I/O and DirectStorage), then you'll start seeing games outside of the immediate console ecosystems taking advantage of this as well.
 

Mhmmm 2077

Member
I'm so damn hyped + the twitter info that it's not gonna be a paper launch, but indeed they will be prepared... I just hope it launches the first week of November, so I can build my new PC and have it ready for Cyberpunk 2077.
 
The 3070 on average will be a little slower than the 2080 Ti, I predict. Nvidia are keeping it secret as they want to try and spoil AMD's RDNA2 party on the 29th, possibly cutting the price, as I think AMD will have something faster than the 3070 for $450.
 
Apologies. I should have checked if it was 2070S/2080S. I meant 2080 Super. It has 8 gb of VRAM.



My point is that it doesn't matter if you have more memory if your computational units can't consume it fast enough. If you're worried about a game allocating 10+ gigs of VRAM, don't be; your OS manages that for you. Here's a video that explains it, I've timestamped it.

The only reason why I'm pushing back on this is so that people are informed and don't pay more than they have to for a dope gaming experience. A 20 gig card is gonna cost a fortune because GDDR6X is expensive and they don't wanna cannibalize 3090 sales.

I've heard the allocation vs actual usage argument and I'm not against it. The problem is that allocation is easy to measure while actual use is not. How much VRAM is a game actually using? It seems that's not an easy thing to measure.

If a game is allocating 10GB of VRAM, how much is it actually using? We don't know and that doesn't seem like it's a thing Digital Foundry investigates and developers don't commonly share that info either.
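For what it's worth, the overlays everyone screenshots report roughly what the sketch below reads out: memory the driver currently has allocated/resident on the card, not the per-frame working set. A minimal example with NVIDIA's NVML Python bindings (pip install nvidia-ml-py); note that it cannot tell you how much of that memory a game actually touches each frame, which is exactly the problem being described:

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)    # total / free / used, in bytes
print(f"VRAM in use: {mem.used / 2**30:.1f} GiB of {mem.total / 2**30:.1f} GiB")
pynvml.nvmlShutdown()
```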

Would a 3080 with 8GB of VRAM be a problem? How about 4GB? How about 1GB? At some point it absolutely would be. And no one can tell me how much VRAM next gen games will "actually use".

So in this situation would more not still be better?

If 10GB is more than enough and we shouldn't worry about 10GB, even though we are about to switch over to next gen, then why did they give the 1080ti 11GB... 3.5 years ago?

Based on the fact that we can't know...

1. How much VRAM games actually use (today)
2. How much VRAM games will actually use (both in the cross-gen period and then in full next gen)

... I'd rather have a 20GB 3080... even if it costs more. I'm willing to pay.
 

Rickyiez

Member
Outside of 4K it'll be fine. DLSS reduces VRAM requirements even more, and it will be more widespread next gen.
 

longdi

Banned
Is this >2.1GHz game clock or boost clock?

Imo sounds like boost clocks, aka good luck running games sustained at those kinds of clock speeds.
 

smbu2000

Member
I've heard the allocation vs actual usage argument and I'm not against it. The problem is that allocation is easy to measure while actual use is not. How much VRAM is a game actually using? It seems that's not an easy thing to measure.

If a game is allocating 10GB of VRAM, how much is it actually using? We don't know and that doesn't seem like it's a thing Digital Foundry investigates and developers don't commonly share that info either.

Would a 3080 with 8GB of VRAM be a problem? How about 4GB? How about 1GB? At some point it absolutely would be. And no one can tell me how much VRAM next gen games will "actually use".

So in this situation would more not still be better?

If 10GB is more than enough and we shouldn't worry about 10GB, even though we are about to switch over to next gen, then why did they give the 1080ti 11GB... 3.5 years ago?

Based on the fact that we can't know...

1. How much VRAM games actually use (today)
2. How much VRAM games will actually use (both in the cross-gen period and then in full next gen)

... I'd rather have a 20GB 3080... even if it costs more. I'm willing to pay.
Well there are actually current games that can use quite a lot of vram at 4k. (I think it was Doom Eternal?)
Last month Hardware Unboxed looked at the 2080 vs 1080ti vs 1080.
The 2080 generally outperforms the 1080ti, but in some cases at 4k the 8GB 2080 performs worse because of the lack of vram compared to the 11GB on the 1080ti.
I can only imagine vram requirements are going to increase as the generation continues on.



I have an 11GB 2080ti myself and I’m thinking the 11GB should be good for at least the beginning of this new generation. At least until I replace it with a newer card later on.
 
Not AMD's.
Nvidia usually go past their boost clocks.

If we compare 1:1, AMD game clocks = Nvidia boost clocks. Sorta.

My Vega64 Nitro+ when running something demanding boosts up to 1700ish range and stays in that vicinity.

No idea if there are other older AMD cards that have issues with maintaining boost? I haven't seen it on mine.

Plus this is an entirely new architecture (RDNA2) so I don't think previous comparisons/trends would apply here. Did RDNA1 have any issues maintaining boost clocks generally?
 

LordOfChaos

Member
Humm, this is kind of setting off my BS sensor so grain of salt and everything, but what if the big expansion on CU count was due to some being dedicated for RT?

 

Ascend

Member
Humm, this is kind of setting off my BS sensor so grain of salt and everything, but what if the big expansion on CU count was due to some being dedicated for RT?

Hm. It's possible that they pack only 20 of the CUs with BVH acceleration, but to me it seems more likely that all of the CUs have it. 20 also seems like such a random number... To reach 20, you'd need to have 5 CUs with RT per shader engine, and even then, every other shader of those five does not have BVH acceleration, since they're dual compute shaders. Or, you have two shader engines with, and two shader engines without RT, and both have 5CUs that are fully functional for RT. Seems weird to me. But we'll see. RT is after all the least we know about.
But the way it was written, to me implies that all 80 could do RT;
"will use ~ 20CUs from the 80CU pool for HWRT "
It sounds more like a software configuration/setting rather than anything else. But we'll see.
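The arithmetic behind that objection, assuming the usual big-RDNA layout of four shader engines with CUs paired into 2-CU WGPs (the layout itself is an assumption for an unannounced chip):

```python
total_cus, shader_engines, cus_per_wgp = 80, 4, 2

cus_per_engine = total_cus // shader_engines      # 20 CUs per shader engine
wgps_per_engine = cus_per_engine // cus_per_wgp   # 10 WGPs per shader engine
rt_cus_per_engine = 20 // shader_engines          # 5 per engine, if only 20 CUs had RT hardware

print(f"{cus_per_engine} CUs ({wgps_per_engine} WGPs) per engine; "
      f"{rt_cus_per_engine} 'RT CUs' per engine = {rt_cus_per_engine / cus_per_wgp} WGPs, "
      f"i.e. a half-WGP split")
```

That half-WGP split is why a hardware partition looks unlikely and a software cap on how many CUs work on BVH traversal reads as more plausible.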

22.5 AMD TFLOPS is most likely equivalent to 3070 20TFLOPS. Benches will most likely prove this.
This is the one time where I expect AMD TFs to be superior to nVidia ones. If AMD reaches ~22TF, it should go toe to toe with the 30TF of the 3080.
 
Hm. It's possible that they pack only 20 of the CUs with BVH acceleration, but to me it seems more likely that all of the CUs have it. 20 also seems like such a random number... To reach 20, you'd need to have 5 CUs with RT per shader engine, and even then, every other shader of those five does not have BVH acceleration, since they're dual compute shaders. Or, you have two shader engines with, and two shader engines without RT, and both have 5CUs that are fully functional for RT. Seems weird to me. But we'll see. RT is after all the least we know about.
But the way it was written, to me implies that all 80 could do RT;
"will use ~ 20CUs from the 80CU pool for HWRT "
It sounds more like a software configuration/setting rather than anything else. But we'll see.


This is the one time where I expect AMD TFs to be superior to nVidia ones. If AMD reaches ~22TF, it should go toe to toe with the 30TF of the 3080.

I'm not sure you're aware, but nVidia's architecture is 7nm also. AMD does well when everyone else isn't on the same page, like intel. But even then, it's not like AMD is blowing intel's 14nm CPU out of the water in terms of performance. Are they cheaper at the moment? certainly. But they can't compete against better engineering.

In the end, AMD's 22.5TFLOPs will most likely equate to nVidia's 3060TI - 3070, but I doubt it will come close to the same performance and efficiency of the 30 TFLOP 3080. Efficiency is AMD's biggest hurdle.
 
Just so people know, 7nm isn't necessarily 7nm, and 8nm isn't necessarily 8nm. The takeaway is that NVIDIA contracted Samsung, who are a lot worse than TSMC. If they had TSMC 7nm instead of garbage Samsung, you'd see higher clocks and I bet the 3080 crashing at 1900MHz+ wouldn't be happening.
 

Ascend

Member
In the end, AMD's 22.5TFLOPs will most likely equate to nVidia's 3060TI - 3070, but I doubt it will come close to the same performance and efficiency of the 30 TFLOP 3080. Efficiency is AMD's biggest hurdle.
All I can say is, compare Turing vs Ampere TF, and then compare RDNA1 TF to Turing TF.
As for efficiency, look at the PS5 and XSX.

But they can't compete against better engineering.
jlaw-whtvr.gif
 
I'm not sure you're aware, but nVidia's architecture is 7nm also. AMD does well when everyone else isn't on the same page, like intel. But even then, it's not like AMD is blowing intel's 14nm CPU out of the water in terms of performance. Are they cheaper at the moment? certainly. But they can't compete against better engineering.

In the end, AMD's 22.5TFLOPs will most likely equate to nVidia's 3060TI - 3070, but I doubt it will come close to the same performance and efficiency of the 30 TFLOP 3080. Efficiency is AMD's biggest hurdle.

I think we found Jensen's GAF account!

I think you are pretty much wrong on all counts, which is fairly spectacular given a lot of this info is readily available.

Ampere is on Samsung's 8nm node, which is actually a rebrand of their 10nm node. This node is objectively worse than TSMC 7nm. TSMC are currently the market leader in silicon wafer production in the world.

The reason that Ampere draws so much power compared to Turing is the node they chose, to cut a long story short Nvidia cheaped out on the node because they didn't want to pay TSMC's asking price. In turn this gives you, the consumer a worse performing product with higher power draw/temps than if Ampere was running on a TSMC 7nm node.

Secondly TFlops are not a measure of gaming performance, they are a measure of certain compute calculations. We've been over this many times in previous threads, if TFlops were a measure of performance then why would a 20something TFlop 3070 only be equal to a 13.45TFlop 2080ti? Why was Vega64 rated with more TFlops than 1080ti but less powerful in gaming performance?

Why would a 30TF 3080 only be around 30% better performance than a 13.45TF 2080ti??

TFlops are not a measure of gaming performance, full stop.
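For reference, here's where the headline numbers being thrown around come from: peak FP32 is just 2 ops (FMA) × shader count × clock, which is exactly why it says so little about game performance. The Navi 21 row uses the rumoured 80 CU / 2.2 GHz figures; the others use official boost clocks:

```python
def fp32_tflops(shaders, clock_ghz):
    # 2 FLOPs per shader per cycle (fused multiply-add)
    return 2 * shaders * clock_ghz / 1000

cards = {
    "RTX 2080 Ti":      (4352, 1.545),
    "RTX 3080":         (8704, 1.710),
    "Navi 21 (rumour)": (80 * 64, 2.2),   # 80 CUs x 64 shaders per CU
}
for name, (shaders, ghz) in cards.items():
    print(f"{name}: ~{fp32_tflops(shaders, ghz):.1f} TFLOPS")
```

Ampere only reaches its figure by letting its second datapath run FP32 instead of INT32, so those shaders aren't all doing FP32 every cycle in real games, which is why ~30 TF of Ampere lands nowhere near 2.2x a 13.45 TF 2080 Ti.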

As for performance of RDNA2, nobody really knows where it will land but most evidence is pointing towards them having something competitive with Nvidia on the high end/across the stack so we will have to wait and see how it plays out. Not too long now, end of this month we should know for sure how they stack up, I think you may be unpleasantly surprised!
 
Just so people know, 7nm isn't necessarily 7nm, and 8nm isn't necessarily 8nm. The takeaway is that NVIDIA contracted Samsung, who are a lot worse than TSMC. If they had TSMC 7nm instead of garbage Samsung, you'd see higher clocks and I bet the 3080 crashing at 1900MHz+ wouldn't be happening.
I was under the impression that was due to 3rd parties using cheap capacitors..

I didn't realize it was 8nm. In any case thanks for the info, but I still think nVidia will edge out AMD again.

I think we found Jensen's GAF account!

I think you are pretty much wrong on all counts, which is fairly spectacular given a lot of this info is readily available.

Ampere is on Samsung's 8nm node, which is actually a rebrand of their 10nm node. This node is objectively worse than TSMC 7nm. TSMC are currently the market leader in silicon wafer production in the world.

The reason that Ampere draws so much power compared to Turing is the node they chose, to cut a long story short Nvidia cheaped out on the node because they didn't want to pay TSMC's asking price. In turn this gives you, the consumer a worse performing product with higher power draw/temps than if Ampere was running on a TSMC 7nm node.

Secondly TFlops are not a measure of gaming performance, they are a measure of certain compute calculations. We've been over this many times in previous threads, if TFlops were a measure of performance then why would a 20something TFlop 3070 only be equal to a 13.45TFlop 2080ti? Why was Vega64 rated with more TFlops than 1080ti but less powerful in gaming performance?

Why would a 30TF 3080 only be around 30% better performance than a 13.45TF 2080ti??

TFlops are not a measure of gaming performance, full stop.

As for performance of RDNA2, nobody really knows where it will land but most evidence is pointing towards them having something competitive with Nvidia on the high end/across the stack so we will have to wait and see how it plays out. Not too long now, end of this month we should know for sure how they stack up, I think you may be unpleasantly surprised!

fair. Let's see them benches.
 
I was under the impression that was due to 3rd parties using cheap capacitors..
These aren't mutually exclusive. A more advanced node tends to produce a more power-efficient device, which goes hand in hand with the capacitors' ability to hold their charge.
 

rofif

Can’t Git Gud
I wish for AMD to compete, but leaks say it's a 3070 performer, not really a 3080... It's fine, but there's no DLSS, and ray tracing support is limited to Nvidia in some games.
 

Kenpachii

Member
I've heard the allocation vs actual usage argument and I'm not against it. The problem is that allocation is easy to measure while actual use is not. How much VRAM is a game actually using? It seems that's not an easy thing to measure.

If a game is allocating 10GB of VRAM, how much is it actually using? We don't know and that doesn't seem like it's a thing Digital Foundry investigates and developers don't commonly share that info either.

Would a 3080 with 8GB of VRAM be a problem? How about 4GB? How about 1GB? At some point it absolutely would be. And no one can tell me how much VRAM next gen games will "actually use".

So in this situation would more not still be better?

If 10GB is more than enough and we shouldn't worry about 10GB, even though we are about to switch over to next gen, then why did they give the 1080ti 11GB... 3.5 years ago?

Based on the fact that we can't know...

1. How much VRAM games actually use (today)
2. How much VRAM games will actually use (both in the cross-gen period and then in full next gen)

... I'd rather have a 20GB 3080... even if it costs more. I'm willing to pay.

You are spot on with your reactions.

RE2 uses 4GB+; as there are no 5GB cards, it's 6GB minimum to get everything maxed out at 1080p.

Consoles dedicate about 3GB to VRAM allocation; PC, through higher settings, doubles that as a result.

The Xbox Series X uses 10GB of VRAM; double that and 20GB sounds about right, maybe even a bit higher at some point. Which explains the 3090's 24GB model. Nvidia knows, AMD knows, everybody knows. As for people that bought into 3080s with 10GB, they will be burned hard.

Especially in the future it will be even less of an issue, with RTX IO coming etc.
VRAM usage also bloats with AA, which you don't need at 4K and is just a waste of resources.
Just because a game "asks" for 12GB of VRAM doesn't mean it actually needs that much. People seem to not understand this.
RE3 Remake eats up ~9 to 10GB of VRAM on a 2080ti, yet you can easily play this game on a gimped 970 with 3.5+0.5GB (I know this for a fact, I do it) just as well.
According to RE3 Remake it should take up 14GB of VRAM with everything on max, yet it still runs butter smooth on the 2080ti and, with some lowered settings, also on the 970.

People make a way bigger fuss about 10GB than it deserves, especially since they always turn AA up as high as possible, further eating away VRAM, and yet they're still gaming just fine.

IO means shit here; the Xbox Series X will dedicate 10GB of VRAM no matter what, with or without IO improvements. That means 10GB of VRAM, if not more, will be required on PC, unless you like to sit with your 700 dollar card below console settings because it runs out of memory.

The RE2 remake does not run remotely butter smooth on a 970 without decreasing settings; I dunno about the RE3 remake, I never tested it on a 970.

Here's an example; welcome to the hell of running out of VRAM. Your GPU performance doesn't mean shit at this point.

 

CuNi

Member
The RE2 remake does not run remotely butter smooth on a 970 without decreasing settings; I dunno about the RE3 remake, I never tested it on a 970.

Here's an example; welcome to the hell of running out of VRAM. Your GPU performance doesn't mean shit at this point.

You found a guy's video where he talks about tech issues, and he says the stuttering also happens in RE7; nice job proving nothing.
It's even playable with everything maxed, and you can achieve a consistent 60 by just turning a few sliders down from ultra to high.


I don't think you ever had a 970, or if you did, I don't know what you did with it that it performed so badly for you.
And that thing had the gimped 3.5 + 0.5GB like I said. VRAM literally doesn't kill performance as much as people seem to think. You're not going to tank from 120 to 10FPS or so lol.
 

CrustyBritches

Gold Member
RE2 is an example of a game that will allocate more memory than it actually uses. I've done tests with a 1060 6GB and settings that allegedly would require over 12GB and it didn't hitch, stutter, or drop any frames. Same with a 2060S and over 11GB requirement on 8GB VRAM. Still runs just fine.

Off the top of my head, a more recent example of VRAM limitations being hit was the 2060 6GB in Wolfenstein Youngblood with RT. I saw a couple benchmarks where the card doesn't scale in line with the 8GB+ cards like 2060S up and performance completely falls off a cliff.

MS has kinda laid out the guidelines for next-gen Xbox/PC games, and that's ~4GB VRAM for 1440p/High and 10GB VRAM for 4K/Ultra.
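For a rough feel of why the 1440p and 4K guidance differs: render targets scale with pixel count, and the bytes-per-pixel figure below is an assumed ballpark for a deferred G-buffer plus depth and an HDR output (real engines vary a lot). Most of the remaining gap is higher-resolution texture mips rather than the targets themselves:

```python
bytes_per_pixel = 48   # assumed G-buffer + depth + HDR target footprint per pixel

for name, (w, h) in {"1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:.1f} MPix, ~{pixels * bytes_per_pixel / 2**20:.0f} MB of render targets")
```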
 

Rickyiez

Member
Just so people know, 7nm isn't necessarily 7nm, and 8nm isn't necessarily 8nm. The takeaway is that NVIDIA contracted Samsung, who are a lot worse than TSMC. If they had TSMC 7nm instead of garbage Samsung, you'd see higher clocks and I bet the 3080 crashing at 1900MHz+ wouldn't be happening.

You mean crashing above 2050MHz.

Anything below 2GHz is rock solid, zero crashes.
 