
GeForce GTX 970s seem to have an issue using all 4GB of VRAM, Nvidia looking into it

I posted this elsewhere, but I think it's actually more likely that the issue is hardware-related and cannot be fixed. Here's an illustration of GM204 (the chip inside the 970 and the 980):

Three of those sixteen SMMs are cut/disabled to make a 970, whereas the 980 gets all sixteen fully enabled. It seems that each of the four 64-bit memory controllers corresponds to one of the four raster engines. The 970's effective pixel fillrate has been demonstrated to be lower than the 980's even though SMM cutting leaves the ROPs fully intact (http://techreport.com/blog/27143/here-another-reason-the-geforce-gtx-970-is-slower-than-the-gtx-980), and the same situation may apply to bandwidth with Maxwell. However, the issue may be completely independent of which SMMs are cut and may simply come down to how many.

GM206's block diagram demonstrates the same raster engine to memory controller ratio/physical proximity:

I expect a cut-down GM206 part, and even a GM200 part, will exhibit the same issue as a result; it might be intrinsically tied to how Maxwell as an architecture operates. Cutting down SMMs effectively messes up ROP and memory controller behavior as well as shaders and TMUs. I also don't think there's a chance in hell Nvidia were unaware of this, but I could be wrong.
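For a rough sense of scale on the bandwidth side (back-of-the-envelope figures from the reference spec sheet, not measurements, and the single-partition case below is purely hypothetical): the full 256-bit bus at 7 Gbps GDDR5 works out to 224 GB/s, and each 64-bit partition accounts for only a quarter of that, so anything that effectively sidelines one controller would show up as a big drop over part of the card's address range.

```
// Back-of-the-envelope peak bandwidth for GM204 (reference spec-sheet values,
// not measurements; the single-partition case is purely hypothetical).
#include <cstdio>

int main() {
    const double busWidthBits = 256.0;  // 4 x 64-bit memory controllers
    const double dataRateGbps = 7.0;    // 7 Gbps effective GDDR5
    const double peakGBs = busWidthBits / 8.0 * dataRateGbps;  // 224 GB/s

    printf("Full 256-bit bus     : %.0f GB/s\n", peakGBs);
    printf("One 64-bit partition : %.0f GB/s\n", peakGBs / 4.0);
    return 0;
}
```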

I can't comment on the technical stuff you explained, but if it's true that Nvidia knew about this and deliberately withheld it in their communications and marketing, then that borders on false advertising and feels scummy to me.
 

Spineker

Banned
So what can we realistically expect for the false advertisement of 4GB? Refund? Exchange for 980s?

A refund at the very least. Just re-evaluate the value of the card to reflect what it can actually do and refund the difference.
 
D

Deleted member 17706

Unconfirmed Member
You might not be playing with Ultra textures. Just turning on the setting does nothing unless you also download the HD texture pack, which has to be done manually.

Yeah, I made sure to download the pack.
 
Holy crap, crazy timing for me, I was this close to pulling the trigger on a 970 today. Holding off for a bit to see what happens with this...
 
I don't know about that Nai benchmark. I can't see the source code, and thus I don't trust it.
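That said, the basic technique isn't hard to sketch yourself: allocate the card's memory in fixed-size chunks, then time a read pass over each chunk and see whether the last ones come out dramatically slower. Something along these lines (a rough CUDA sketch of the idea, not Nai's actual code; the chunk size and launch configuration are arbitrary choices of mine):

```
// Rough per-chunk VRAM read-bandwidth test (sketch only, error handling kept minimal).
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Read every element of a chunk; the conditional write stops the compiler
// from optimising the loads away.
__global__ void readChunk(const float* data, size_t n, float* sink) {
    float acc = 0.0f;
    for (size_t i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += gridDim.x * blockDim.x)
        acc += data[i];
    if (acc == 123456.0f) *sink = acc;  // practically never true
}

int main() {
    const size_t chunkBytes = 128u * 1024u * 1024u;   // 128 MiB per chunk
    const size_t n = chunkBytes / sizeof(float);
    std::vector<float*> chunks;
    float* sink = nullptr;
    cudaMalloc(&sink, sizeof(float));

    // Grab as much VRAM as the driver will hand out, one chunk at a time.
    for (;;) {
        float* p = nullptr;
        if (cudaMalloc(&p, chunkBytes) != cudaSuccess) break;
        chunks.push_back(p);
    }
    printf("Allocated %zu chunks (%zu MiB)\n", chunks.size(), chunks.size() * 128);

    // Time a read pass over each chunk and report the effective bandwidth.
    for (size_t c = 0; c < chunks.size(); ++c) {
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);
        readChunk<<<1024, 256>>>(chunks[c], n, sink);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("Chunk %3zu: %7.1f GB/s\n", c, (chunkBytes / 1e9) / (ms / 1e3));
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
    }

    for (float* p : chunks) cudaFree(p);
    cudaFree(sink);
    return 0;
}
```

Keep in mind the driver won't hand over the full 4GB while the desktop is using the card, so the last chunk or two may just reflect OS overhead rather than the hardware.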

On the other hand, frame times and frame rate on AC: Unity are pretty stable at 8x MSAA with 3.8 GB of VRAM usage.

 

potam

Banned
Well, they still haven't fixed the SLI voltage issues that have been apparent since launch, so I'm not holding my breath. I remember when I first heard about this, it was only a handful of users actually being affected. Seems like it may be those who have Hynix memory on their cards.
 

bootski

Member
o boy that really sucks. although, i'm sure nvidia won't leave 970 owners in the lurch if it turns out to be an unfixable issue so i'm tempted to actually buy one and leave it boxed up tomorrow just in case the refund/compensation is actually worth it. i've been waiting for them to come down in price before i pulled the trigger but given the state of the CDN economy i don't think that's gonna happen anytime soon. worst case scenario, i have 30 days to return an unopened product from my hardware store.

the only issue i have is how i'm gonna explain to the guy i just helped build a machine with a strix oc 970 that his video card has potentially unfixable hardware issues.
 
Well, they still haven't fixed the SLI voltage issues that have been apparent since launch, so I'm not holding my breath. I remember when I first heard about this, it was only a handful of users actually being affected. Seems like it may be those who have Hynix memory on their cards.

Ah, that may be possible. My memory manufacturer is Samsung, and I don't seem to be affected by it in games. In that case, I'm thinking the blame would lie with Hynix.
 

XBP

Member
Shadow of Mordor with ultra textures (irrespective of resolution) uses a max of ~3600MB for me. The max I've seen my card use is 3750MB when playing Titanfall at 4K.
 

JaseC

gave away the keys to the kingdom.
If it is a hardware issue then I imagine Nvidia will allow 970 owners to upgrade to a 980 free of charge. The only other alternative would be some sort of refund program and that'd leave Nvidia even more out of pocket.
 
Shadow of Mordor with ultra textures (irrespective of resolution) uses a max of ~3600MB for me. The max I've seen my card use is 3750MB when playing Titanfall at 4K.

How is your performance with Titanfall at 4K, and what is your VRAM manufacturer?

If it is a hardware issue then I imagine Nvidia will allow 970 owners to upgrade to a 980 free of charge. The only other alternative would be some sort of refund program and that'd leave Nvidia even more out of pocket.

Not if the hw problem is related to Hynix memory and not a fault in the design itself. At that point, it's not nvidia's fault. It's hynix's and the vendors that chose to use hynix.
 
You might want to add a note in the first post that this may only be affecting 970 users with Hynix memory, as well as instructions for people to check who their VRAM manufacturer is.

Lots of manufacturers started going with Hynix after the initial batch of cards. Pretty shady shit. Samsung memory overclocks much better too.

To check what kind of memory you have, install Nvidia Inspector v1.9.7.3.

I added these posts to the OP. GPU-Z also tells you what make the memory on your card is.
 

JaseC

gave away the keys to the kingdom.
Not if the hw problem is related to Hynix memory and not a fault in the design itself. At that point, it's not nvidia's fault. It's hynix's and the vendors that chose to use hynix.

True, but I would think if the issue could be tracked back to a specific brand of memory modules then the problem would manifest at random points, not specifically at ~3.5GB and higher.
 
True, but I would think if the issue could be tracked back to a specific brand of memory modules then the problem would manifest at random points, not specifically at ~3.5GB and higher.

Well, the thing is, I have a 970 and performance doesn't tank at 3.5 GB. In fact, I can reach 3.9 GB, and my card is still fine. Also, if a bad batch was made by Hynix, it would likely be repeated across every card in that batch.
 
I have a Gigabyte G1 970 (rev 1.0), don't know which memory yet, I'll check when I come home from work.
Really happy with the card and haven't experienced any bottlenecks.
Then again, I'm gaming on 1080p (TV) and 1200p (Dell 24" monitor) so I'm not sure if I can even push it to 4GB usage.
The only issue I experienced is with AC:U (the occasional 4-5 second freeze, etc.) but that game itself is fucked so I don't think it's the GPU. :/
Really interested how this plays out.
 

Faith

Member
I have a 970 SLI system and the only game that doesn't perform well is Far Cry 4. I can play Tomb Raider with 4x SSAA and Metro with 2.25x SSAA at a solid 60fps.
 

JaseC

gave away the keys to the kingdom.
I have Samsung memory on mine so I guess I'm in the clear. But I hope this gets sorted out for those who are having problems.

Run the RAM benchmark seen in the OP (you may also need this).

Also, if a bad batch was made by Hynix, it would likely be repeated across every card in that batch.

Yeah, but for all we know there are separate bins for the 970 and 980 given they're clocked differently (unlikely but a possibility all the same).
 

Coldsun

Banned
I have a Gigabyte G1 970 (rev 1.0), don't know which memory yet, I'll check when I come home from work.
Really happy with the card and haven't experienced any bottlenecks.
Then again, I'm gaming on 1080p (TV) and 1200p (Dell 24" monitor) so I'm not sure if I can even push it to 4GB usage.
The only issue I experienced is with AC:U (the occasional 4-5 second freeze, etc.) but that game itself is fucked so I don't think it's the GPU. :/
Really interested how this plays out.

Should be Samsung memory ;)
 

cheezcake

Member
Mine has Hynix memory. I haven't had the issue where the card goes past 3.5GB of VRAM and performance slows to a crawl, but it definitely doesn't like to use more than 3.5GB; it only goes past that if I force AA or supersampling up a lot.
 

JaseC

gave away the keys to the kingdom.
Mine has Hynix memory. I haven't had the issue where the card goes past 3.5GB of VRAM and performance slows to a crawl, but it definitely doesn't like to use more than 3.5GB; it only goes past that if I force AA or supersampling up a lot.

See my post above and report your stats.
 

XBP

Member
How is your performance with Titanfall at 4K, and what is your VRAM manufacturer?

Just did a quick test. Everything maxed but ambient occlusion disabled. At 4K I get 45-60 frames throughout the game, including heavy battles (some frame drops to 30-40).

Maximum VRAM usage spiked to 3718MB for a few seconds but stabilized throughout the game at 3695MB.



Enabling AO drops the frame rate to 10-20 FPS with no change in VRAM usage.
 
Just did a quick test. Everything maxed but ambient occlusion disabled. At 4K I get 45-60 frames throughout the game, including heavy battles (some frame drops to 30-40).

Maximum VRAM usage spiked to 3718MB for a few seconds but stabilized throughout the game at 3695MB.



Enabling AO drops the frame rate to 10-20 FPS with no change in VRAM usage.

That would suggest it's actually not having problems allocating the memory in gameplay scenarios. You also have Hynix memory instead of Samsung, which suggests Hynix memory is not having any real problems. What version of Windows are you running? Nvm, you're running 8.1 just like I am.
 

JaseC

gave away the keys to the kingdom.
It'd be of more help if people could do this:


...and report back with a screenshot of their stats window, along with the make/model of their GPU and the brand of memory it uses (use GPU-Z for the latter).

Edit: It's telling me that the last ~400MB of whichever one of the 670s it's testing reads at 4GB/s, which I find rather odd.

Edit edit: Actually, I forgot to account for OS overhead, so the sudden and massive drop makes sense.
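If you want to see how much of the card the OS/desktop is already holding before you run the test, the CUDA runtime's cudaMemGetInfo reports free vs. total device memory; a minimal sketch (error handling omitted):

```
// Print free vs. total VRAM; the gap between them is what the OS/desktop
// is already using, which the benchmark can't test cleanly.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t freeBytes = 0, totalBytes = 0;
    cudaMemGetInfo(&freeBytes, &totalBytes);
    printf("Free : %zu MiB\n", freeBytes / (1024 * 1024));
    printf("Total: %zu MiB\n", totalBytes / (1024 * 1024));
    return 0;
}
```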
 
It'd be of more help if people could do this:



...and report back with a screenshot of their stats window, along with the make/model of their GPU and the brand of memory it uses (use GPU-Z for the latter).

Edit: It's telling me that the last ~400MB of whichever one of the 670s it's testing reads at 4GB/s, which I find rather odd.


Gigabyte G1, purchased within 30 minutes of going up for sale on Amazon.com the day 970s got announced. Samsung mem.
 