
Nvidia responds to GTX 970 memory issue


I mean, it's not an issue for me yet, but I also bought it with it being "future-proof" in mind, based on the VRAM.

Same here. I still have time to get a refund from Amazon (Feb 1st), but I don't know what to do; I just built this PC at the beginning of the month and I won't get a 980.
 

Orayn

Member
Future proofing isn't a real thing you guys. It never has been

Yes it is. People can have unreasonable expectations, but the idea of getting a part whose specs and features anticipate the demands of upcoming software is a completely reasonable thing to do.
 

Seanspeed

Banned
So if I'm reading this right, it would be better to have a true 3.5GB VRAM card rather than a 4GB card with a shitty last 0.5GB of usable space, since this space can cause performance hitches etc. when being used.
Trying to run more than 3.5GB on a strictly 3.5GB card would be worse, not better.
 

Reallink

Member
But they haven't done any of that?

Actually they did. In admitting that they, by design, partitioned the RAM into 3.5GB and 500MB sections, with the first receiving "high priority access", and with it being demonstrated that the 500MB pool is effectively unusable due to a 90% bandwidth reduction, it will be argued that Nvidia knew the card was, in practice and for all intents and purposes, NOT a 4GB card, but they still chose to market and represent it as one.
 

JRW

Member
Trying to run more than 3.5GB on a strictly 3.5GB card would be worse, not better.

I'm not so sure; I figure the game(s) wouldn't attempt to use over the max 3.5GB limit, period. For example, Shadow of Mordor wouldn't let me use the High texture setting on my GTX 480 1.5GB since it has under 2GB VRAM.
 

Serandur

Member
Nvidia shooting themselves in the foot, etc.

What the fuck were they thinking?

"Those dopes will never figure it out, we own the reviewers, and our fanbase will defend the problem to the death!"


Realistically, they probably consulted with their legal team long ago and were given the go-ahead for these situations, as the cards technically* have 4 GB... even though a good 1/8th of it is as slow as an Xbox 360 GPU's bandwidth.
 

Salaadin

Member
I don't like this at all. Makes me want to switch to AMD.

Sucky thing is that it's late enough that I can't get my money back. I'm stuck with it. It's a great card, but I don't like being told lies.
 

Costia

Member
I'm not so sure; I figure the game(s) wouldn't attempt to use over the max 3.5GB limit, period. For example, Shadow of Mordor wouldn't let me use the High texture setting on my GTX 480 1.5GB since it has under 2GB VRAM.

3.5+0.5 is better than 3.5.
For example, they could dump the vertex buffers (which AFAIK are a lot less bandwidth-intensive per frame) into that 0.5GB, which would free up more high-bandwidth memory for the texture buffers.

More like lack of improvement in games due to developers targeting consoles with 2006 specifications.

Same end result for me, so the reason it worked doesn't really matter. You can say the same about targeting xbone/ps4.
 

Seanspeed

Banned
I'm not so sure; I figure the game(s) wouldn't attempt to use over the max 3.5GB limit, period. For example, Shadow of Mordor wouldn't let me use the High texture setting on my GTX 480 1.5GB since it has under 2GB VRAM.
Not many games will literally wall you off from settings you can't take full advantage of.

And I'm guessing it wouldn't stop you from using a higher resolution (as the VRAM recommendations are only for 1080p), so you could still manage to demand more VRAM than your card has.
 
Actually they did. In admitting that they, by design, partitioned the RAM into 3.5GB and 500MB sections, with the first receiving "high priority access", and with it being demonstrated that the 500MB pool is effectively unusable due to a 90% bandwidth reduction, it will be argued that Nvidia knew the card was, in practice and for all intents and purposes, NOT a 4GB card, but they still chose to market and represent it as one.

I haven't been following too closely but where was this [bolded] demonstrated?
 

Makareu

Member
Because an average doesn't take into account stuttering.

60 60 60 60 0 60 60 0 60 60 60 60 60 60 60 0 60 60 60 60 60

The total average is still high but it will be awful to play. On average, over the course of your life you will not be on fire (in fact, probably for over 99% of it), so why bother about being on fire for less than 1%? Fun with averages.
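If you want to see why averages hide this, here's a quick toy Python example (the frame times are made up, nothing to do with any real 970 test): a couple of big hitches barely dent the average FPS but completely dominate the worst-case frame time, which is what you actually feel.

# Toy numbers: 58 smooth frames plus 2 big hitches over one second of gameplay.
frame_times_ms = [16.7] * 58 + [100.0] * 2

avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))
worst_ms = max(frame_times_ms)

print(f"average fps: {avg_fps:.0f}")           # ~51 fps, looks fine on a bar chart
print(f"worst frame time: {worst_ms:.0f} ms")  # 100 ms spikes = visible stutter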

Oh look! The 970 performs like a 980 in every game benchmark while being a lot cheaper. Oh, what luck that no game used more than 3.5 GB when both GPUs were released.

If you actually have a 970 and take the time to test your in-game performance with VRAM usage above 3.5GB, you'll see that performance does not plummet.
It's actually negligible enough that it took 4 months and hundreds of thousands of GPUs sold for it to be noticed.

That being said, there is something a bit shady going on with Nvidia and I'll think twice before purchasing from them in the future.
 

Zane

Member
Yes it is. People can have unreasonable expectations, but the idea of getting a part whose specs and features anticipate the demands of upcoming software is a completely reasonable thing to do.

So what should people get instead? You're acting like a 980 or 290X will somehow stay relevant much longer than the 970, which is silly. These cards will all reach obsolescence at the same time because there's only a maximum of 10% separating their performance from each other. Considering the 980 is 200 bucks more for a gain of 10%, the 970 is a hell of a deal and I'd be hard-pressed to say it isn't the best performance-per-dollar card on the market at the moment. These revelations we're discussing in this thread do not change that.
 

Qassim

Member
I'd say this is like Intel or AMD selling a "quad-core" CPU where one of the cores has dramatically lower performance than the other three and not disclosing that information.

When you say something has four cores, you make a pretty strong implication that they all have the same functionality because that's the standard.

Yeah, that makes sense. I do wonder how they'd advertise something like this, however, because even having that 0.5GB of VRAM for cases that do go over 3.5GB should be better than not having it.
 
Question: I don't really understand what the problem is with Nvidia's tests. They seem to confirm that accessing the last .5 gigs of memory on the 970 does not cause a greater performance hit than accessing the last .5 gigs on the 980. In what sense is the last .5 gigs on the 970 crap then?
 

Xdrive05

Member
Just as speculated: this is a hardware design issue. Scummy of Nvidia to hide it until their customers found it. I already need 4GB for modded Skyrim. I just can't justify buying this card knowing that it's kneecapped like this. Even at 1080p, it's going to hit those limitations with upcoming games. Fucking hell.

Hopefully there is large enough of a backlash that the price comes down for it, at the very least.
 

JRW

Member
Not many games will literally wall you off from settings you can't take full advantage of.

And I'm guessing it wouldn't stop you from using a higher resolution (as the VRAM recommendations are only for 1080p), so you could still manage to demand more VRAM than your card has.

You're right, it's not a wall like I thought. I just tried the High texture setting despite the warning text saying there wasn't enough VRAM available; it still loaded the game and seems to be running OK (only played for a minute though).
 

Seanspeed

Banned
Actually they did. In admitting that they, by design, partitioned the RAM into 3.5GB and 500MB sections, with the first receiving "high priority access", and with it being demonstrated that the 500MB pool is effectively unusable due to a 90% bandwidth reduction, it will be argued that Nvidia knew the card was, in practice and for all intents and purposes, NOT a 4GB card, but they still chose to market and represent it as one.
Nvidia admitted to sectioning off the RAM, but they did not admit to the 0.5GB having 90% reduced bandwidth. That is kind of important. As for the benchmarks that demonstrate it, it seems that certain 970's are showing cutoffs at 3GB and even 980's with cutoffs at 3.5GB. So the way this 90% reduction is being 'demonstrated' is still suspect.
 

Reallink

Member
I haven't been following too closely but where was this [bolded] demonstrated?

It is hypothetical, but my understanding is that that's what benchmarks which specifically target the 500MB pool have been showing. I could be mistaken, but I believe I've seen a 90% reduction to 19GB/s tossed around.
 

Jarmel

Banned
Nvidia admitted to sectioning off the RAM, but they did not admit to the 0.5GB having 90% reduced bandwidth. That is kind of important. As for the benchmarks that demonstrate it, it seems that certain 970's are showing cutoffs at 3GB and even 980's with cutoffs at 3.5GB. So the way this 90% reduction is being 'demonstrated' is still suspect.

So you're telling me to skip the 900 series.
 
It is hypothetical, but my understanding is that that's what benchmarks which specifically target the 500MB pool have been showing. I could be mistaken, but I believe I've seen a 90% reduction to 19GB/s

If Nvidia's claim that it's within a few percent of the expected performance (in both FPS and frame times) in games is true, then I think that benchmark is meaningless. I feel like we're wielding a black box without really understanding it.
 
Nvidia admitted to sectioning off the RAM, but they did not admit to the 0.5GB having 90% reduced bandwidth. That is kind of important. As for the benchmarks that demonstrate it, it seems that certain 970's are showing cutoffs at 3GB and even 980's with cutoffs at 3.5GB. So the way this 90% reduction is being 'demonstrated' is still suspect.

Did any of those showing 3GB cutoffs test properly with a headless GPU?
 

LilJoka

Member
Nvidia admitted to sectioning off the RAM, but they did not admit to the 0.5GB having 90% reduced bandwidth. That is kind of important. As for the benchmarks that demonstrate it, it seems that certain 970's are showing cutoffs at 3GB and even 980's with cutoffs at 3.5GB. So the way this 90% reduction is being 'demonstrated' is still suspect.

That's only because they ran Nai's benchmark incorrectly. The cutoff is definitely at 3500MB on a 970 when running in headless mode. There's no cutoff on a 980 in headless mode.
 

Costia

Member
Question: I don't really understand what the problem is with Nvidia's tests. They seem to confirm that accessing the last .5 gigs of memory on the 970 does not cause a greater performance hit than accessing the last .5 gigs on the 980. In what sense is the last .5 gigs on the 970 crap then?

They didn't confirm that. What they confirmed is that, on average, over an unknown period of time and under unknown conditions, they have a workaround that mitigates the bad performance of the last 0.5GB.
They didn't confirm it doesn't cause stutter.
They didn't confirm it doesn't cause severe performance degradation in other circumstances.
They didn't say what the actual performance of the last 0.5GB is.
All they said was that under the specific conditions they decided to test the card, there was little impact on average frame rate.
According to tests done by others, the last 0.5GB is ~8 times slower than the rest of the memory. http://cdn.overclock.net/7/78/78ab3216_4vciohfw.png
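For anyone wondering how those per-chunk numbers are produced: benchmarks like Nai's basically fill the card's VRAM in fixed-size chunks and time a simple read/write over each one, so a slow segment shows up as a sudden bandwidth drop on the last chunks. Here's a rough Python sketch of that idea; it's not Nai's actual code (which is CUDA), it assumes CuPy is installed, and as discussed above it really needs a headless GPU to fill all 4GB.

import cupy as cp

CHUNK_MB = 128
chunk_elems = CHUNK_MB * 1024 * 1024 // 4   # float32 elements per 128MB chunk

chunks, filled_mb = [], 0
try:
    while True:
        buf = cp.zeros(chunk_elems, dtype=cp.float32)   # grab the next chunk of VRAM
        chunks.append(buf)
        filled_mb += CHUNK_MB

        # Time a simple read-modify-write over this chunk using CUDA events.
        start, end = cp.cuda.Event(), cp.cuda.Event()
        start.record()
        for _ in range(20):
            buf += 1.0
        end.record()
        end.synchronize()
        ms = cp.cuda.get_elapsed_time(start, end)

        gb_moved = 20 * 2 * CHUNK_MB / 1024   # 20 passes, each reads and writes the chunk
        print(f"{filled_mb:5d} MB allocated: ~{gb_moved / (ms / 1000):.1f} GB/s")
except cp.cuda.memory.OutOfMemoryError:
    pass   # card is full; the last few lines printed cover the final chunks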

Not a hardware issue IMO. RROD was a hardware issue.

It's not a matter of opinion. Nvidia admitted that they segmented the memory. It's part of the HW design.
 

JaseC

gave away the keys to the kingdom.
Nvidia admitted to sectioning off the RAM, but they did not admit to the 0.5GB having 90% reduced bandwidth. That is kind of important. As for the benchmarks that demonstrate it, it seems that certain 970's are showing cutoffs at 3GB and even 980's with cutoffs at 3.5GB. So the way this 90% reduction is being 'demonstrated' is still suspect.

I've not seen a 980 benchmark that has more than the last two (or maybe three) chunks exhibit a bandwidth drop, which is explained by running said benchmark while the GPU is rendering Windows. (I get a bandwidth drop to 4GB/s on my 670s when the last ~400MB is tested, for example.) I have seen 970s show a bandwidth drop at the ~3.1GB mark, though, which I agree is rather odd.
 

dr_rus

Member
Nvidia shooting themselves in the foot, etc.

What the fuck were they thinking?

Well, I guess they were thinking something along the lines of "let's sell cut-down GM204 cards to the guys who can't buy a 980, for a lot less money, while giving them very good performance"?

So as I've been saying in the other thread:

a. The 970 is a 4 GB card with the driver trying to avoid allocating into the slow 0.5 GB when possible.

b. If such allocation is unavoidable, the performance drop seems to be in line with the same drop on a 980. This is the key thing here really, as it shows that this 0.5 GB doesn't make any difference in real games.

This "feature" is already shown in all 970 benchmarks and I don't see why it suddenly such a big issue now - your 970 is still performing exactly as it did when you bought it. You bought it after reading benchmarks - and these benchmarks were made on the exact same hardware as you've got, with the same memory allocation issue.

"Future proofing" isn't something that is ruined by having 0.5 out of 4 GBs of memory running slow.

Now we need benchmarks not from Nvidia to further investigate the impact of this issue on real-world games.
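To picture what (a) means in practice, here's a purely hypothetical toy sketch of that kind of allocation policy in Python. This is not Nvidia's driver, just an illustration of "prefer the fast segment, spill into the slow one only when forced":

FAST_MB, SLOW_MB = 3584, 512   # the 3.5 GB fast + 0.5 GB slow split

class ToyVram:
    def __init__(self):
        self.fast_free, self.slow_free = FAST_MB, SLOW_MB

    def alloc(self, size_mb):
        if size_mb <= self.fast_free:     # always try the fast pool first
            self.fast_free -= size_mb
            return "fast"
        if size_mb <= self.slow_free:     # spill only when it no longer fits in the fast pool
            self.slow_free -= size_mb
            return "slow"
        raise MemoryError("out of VRAM")

vram = ToyVram()
for size_mb in (1024, 1024, 1024, 512, 256):        # 3.75 GB of requests in total
    print(f"{size_mb} MB -> {vram.alloc(size_mb)}")  # only the last request lands in 'slow'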
 

Reallink

Member
If nvidia's claim that it's within a few percent of the expected performance (in both FPS and frametimes) in games then I think that benchmark is meaningless. I feel like we're wielding a black box without really understanding it.

The point of contention is what Nvidia may be doing at the driver level to minimize and marginalize the issue in their cherry picked examples, what happens with unoptimized examples, what happens with future titles that are happy to gulp a full 4GB+, and what happens when they EOL the part and stop giving it special attention.
 

Fedelias

Member
What response can we expect out of this? If Nvidia blatantly lied, then they should be held accountable for it. I haven't had any problems with my 970, but 3.5 GB is not looking good for the future.
 

Orayn

Member
So what should people get instead? You're acting like a 980 or 290X will somehow stay relevant much longer than the 970, which is silly. These cards will all reach obsolescence at the same time because there's only a maximum of 10% separating their performance from each other. Considering the 980 is 200 bucks more for a gain of 10%, the 970 is a hell of a deal and I'd be hard-pressed to say it isn't the best performance-per-dollar card on the market at the moment. These revelations we're discussing in this thread do not change that.

I don't have anything else in mind, which is why I'm so frustrated by this. The best option in that price range turns out to have a severe, undisclosed, intentional design problem that is likely to occur in the usage cases that inspired plenty of people to get the card in the first place and will only become more common over time.

At this point I'm hoping for a revised GTX 970, because there's no way I'm buying the current version now that the cat's out of the bag.
 
The point of contention is what Nvidia may be doing at the driver level to minimize and marginalize the issue in their cherry picked examples, what happens with unoptimized examples, and what happens when they EOL the part and stop giving it special attention.

I agree. I hope a tech site does its own investigation into the behaviour of 970s (and 980s for comparison) when below and above 3.5GB of memory usage. If it really does stutter and become janky at high memory usage, that's kind of some bullshit from Nvidia.

Maybe I'll wait for the new AMD vid cards at this rate...
 

JaseC

gave away the keys to the kingdom.
What response can we expect out of this? If Nvidia blatantly lied, then they should be held accountable for it. I haven't had any problems with my 970, but 3.5 GB is not looking good for the future.

Nvidia did a very similar thing with the 660 Ti:

The best case scenario is always going to be that the entire 192bit bus is in use by interleaving a memory operation across all 3 controllers, giving the card 144GB/sec of memory bandwidth (192bit * 6GHz / 8). But that can only be done at up to 1.5GB of memory; the final 512MB of memory is attached to a single memory controller. This invokes the worst case scenario, where only 1 64-bit memory controller is in use and thereby reducing memory bandwidth to a much more modest 48GB/sec.

If more extensive testing shows that the issue doesn't affect real-world performance as Nvidia claims, then I'd expect much the same thing to happen: nothing.
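For reference, the arithmetic in that quote is just bus width times effective memory clock divided by 8 bits per byte; a quick sanity check in Python (the 970 line just uses its advertised 256-bit / 7 GHz spec):

def bandwidth_gb_s(bus_width_bits, effective_clock_ghz):
    # bus width (bits) * effective data rate (GT/s) / 8 bits per byte
    return bus_width_bits * effective_clock_ghz / 8

print(bandwidth_gb_s(192, 6))   # 660 Ti, full 192-bit bus: 144.0 GB/s
print(bandwidth_gb_s(64, 6))    # 660 Ti, last 512MB on one 64-bit controller: 48.0 GB/s
print(bandwidth_gb_s(256, 7))   # 970's advertised spec: 224.0 GB/s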
 

Linkup

Member
If they put 4GB GDDR5 at whatever speed and bandwidth on the box, then of course it's an issue if people can show that those specs were false. Then again, if the bandwidth matches, shouldn't this have been noticed before it even launched? AMD, it's time to release 380/390 benchmarks.
 

Demon Ice

Banned
Isn't there a similar architecture on the GTX 660 Ti? It's got 2 GB onboard but performs poorly if usage goes above 1.5 GB.
 

Bricky

Member
After reading their response and a few related articles from tech websites it seems like a non-issue in practice, but I understand why it bothers people and Nvidia should not have been shady about this.

Most posts in this thread are overreactions though, and I say this as a 970 owner. Unless we get tests that completely contradict Nvidia's statements on the matter, there is no reason to panic or sell your card. It might be smart to wait another week or two and see if this story develops any further if you were planning on buying one, though; better safe than sorry.
 
...in the meantime I'll go on playing Battlefield 4 @ 4K ultra settings with 4xAA at 50-60fps on my 970 SLI, using up to 3.75GB of VRAM.

Honestly...bunch of drama queens.
 

LilJoka

Member
After reading their response and a few related articles from tech websites it seems like a non-issue in practice, but I understand why it bothers people and Nvidia should not have been shady about this.

Most posts in this thread are overreactions though, and I say this as a 970 owner. Unless we get tests that completely contradict Nvidia's statements on the matter, there is no reason to panic or sell your card. It might be smart to wait another week or two and see if this story develops any further if you were planning on buying one, though; better safe than sorry.

I think the problem is that, at 1080p, the driver can't use more than 3500MB with any current game on the market. Meaning we're having to force the GPU into allocating the last 500MB by using silly settings like 2800p Ultra, where overall performance is so compromised that the effect can't be seen clearly.

In future games that could be totally different and cause lots of frame time issues trying to get 1080p60 ultra textures.

...in the meantime I'll go on playing Battlefield 4 @ 4K ultra settings with 4xAA at 50-60fps on my 970 SLI, using up to 3.75GB of VRAM.

Honestly...bunch of drama queens.

Some frame time graphs would be much appreciated. The Windows Basic theme in the background would be good, then we'd have a better idea of where the game memory is allocated.
 

Zane

Member
I don't have anything else in mind, which is why I'm so frustrated by this. The best option in that price range turns out to have a severe, undisclosed, intentional design problem that is likely to occur in the usage cases that inspired plenty of people to get the card in the first place and will only become more common over time.

At this point I'm hoping for a revised GTX 970, because there's no way I'm buying the current version now that the cat's out of the bag.

I don't think this particular 4GB card is going to run into problems sooner than any other 4GB card. And we really have no idea how severe this problem is or how likely it is to occur. In fact, all the data we have points to it performing as expected.
 