
HU: Are 8 Core CPUs More Future-Proof?

PaintTinJr

Member
U got some weird usage in horizon tho. my 9900k does this.


5f72591c288f6253e079e71844046cf3.jpg


Now this is ultra settings at 1080p. I didn't test with lower settings as I was CPU bottlenecked here, so not really pushing it, but again CPU utilization seems perfectly fine, and with everything above 75% taxation, it's safe to say it does use more than 6 cores and 12 threads. Unless the software is wonky, or my background tasks are consuming the extra performance, if 6C/12T is the limit the engine can use. Which again pushes me towards my point of 8 cores being useful over 6 cores, even in games designed for 6 cores only.
It looks like an interesting comparison IMO, because - going by TechPowerUp info - you have smaller caches - L2 2MB (256K per core) vs 4MB, and L3 16MB vs 32MB - yet have those caches running 20% faster, so you could possibly be cache-size bound, which would maybe explain the higher utilisation.

Memory speeds and memory channel count might also change things, although I did notice Md Ray's machine looks to be using an SSD software RAID, which might be causing interrupts that disrupt CPU utilisation.

The other week I got my first look at my nephew's 10th-gen 6-core i5 (K version) RTX 2060 PC, and was surprised how just freeing Explorer to have 1 thread per window, disabling hibernation mode, disabling the recycle bin on his NVMe, relocating the swap file to his HDD, and changing his BIOS to detect CPU temp for the case fans - instead of the motherboard - made reasonable gains in games. So as you say, your OS configuration might be altering utilisation too.
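Incidentally, for anyone who wants to sanity-check per-core utilisation with a script rather than a monitoring overlay, here's a minimal Linux-only sketch that samples /proc/stat twice (the function names are my own; on Windows you'd reach for Task Manager, `typeperf`, or `Get-Counter` instead):

```python
import os
import time


def read_per_core_times():
    """Parse /proc/stat and return (busy, total) jiffy counts per core."""
    stats = []
    with open("/proc/stat") as f:
        for line in f:
            # Per-core lines look like "cpu0 ..."; the aggregate line is "cpu ...".
            if line.startswith("cpu") and line[3].isdigit():
                fields = [int(x) for x in line.split()[1:]]
                idle = fields[3] + fields[4]  # idle + iowait
                total = sum(fields)
                stats.append((total - idle, total))
    return stats


def per_core_usage(interval=0.5):
    """Sample twice, `interval` seconds apart; return per-core usage in %."""
    before = read_per_core_times()
    time.sleep(interval)
    after = read_per_core_times()
    usage = []
    for (b_busy, b_total), (a_busy, a_total) in zip(before, after):
        delta = a_total - b_total
        usage.append(100.0 * (a_busy - b_busy) / delta if delta else 0.0)
    return usage


if __name__ == "__main__":
    for core, pct in enumerate(per_core_usage()):
        print(f"core {core:2d}: {pct:5.1f}%")
```

If one or two cores sit near 100% while the rest idle, the game is thread-limited rather than CPU-limited overall - exactly the pattern being debated in this thread.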
 

Md Ray

Member
Not a bad rule, but it was completely unnecessary last generation. The PS4 and Xbox both had 8-core CPUs but they were easily smoked by quad cores on PC. It's about more than just core count.
Because those 8 cores were simply too weak. Each CPU core was even weaker than Intel's Core 2 Duo, and all 8 cores combined (in the PS4) were about as fast as an Intel Core 2 Quad Q6600 from 2008 - that's how terrible they were. Console Zen 2, on the other hand, is nowhere near as far behind its PC counterparts.
 
Last edited:

OmegaSupreme

advanced basic bitch
Because those 8 cores were simply too weak. Each CPU core was even weaker than Intel's Core 2 Duo, and all 8 cores combined (in the PS4) were about as fast as an Intel Core 2 Quad Q6600 from 2008 - that's how terrible they were. Console Zen 2, on the other hand, is nowhere near as far behind its PC counterparts.
I know. That's why I said it's about more than just core count. Those PS4/Xbox CPUs sucked ass even for the time. They really cheaped out there.
 

Md Ray

Member
It looks like an interesting comparison IMO, because - going by TechPowerUp info - you have smaller caches - L2 2MB (256K per core) vs 4MB, and L3 16MB vs 32MB - yet have those caches running 20% faster, so you could possibly be cache-size bound, which would maybe explain the higher utilisation.

Memory speeds and memory channel count might also change things, although I did notice Md Ray's machine looks to be using an SSD software RAID, which might be causing interrupts that disrupt CPU utilisation.

The other week I got my first look at my nephew's 10th-gen 6-core i5 (K version) RTX 2060 PC, and was surprised how just freeing Explorer to have 1 thread per window, disabling hibernation mode, disabling the recycle bin on his NVMe, relocating the swap file to his HDD, and changing his BIOS to detect CPU temp for the case fans - instead of the motherboard - made reasonable gains in games. So as you say, your OS configuration might be altering utilisation too.
Nah, I haven't installed any RAID driver. When I built my PC I just populated all the SATA ports, plus a USB port, with HDDs for storage and backup purposes.

Not every game behaves like HZD and Days Gone, mind. DOOM Eternal is optimized for 8C Zen 2, and here's how it uses the CPU - as you can see, it spreads the workload almost equally across all the cores and threads, as it should:
VdOkems.png

EcvagTcUwAA2NkO


Here's HZD below for comparison's sake:

LzBiP3T.png


Bonus: ROTTR DX11 vs DX12:

EcvagTbVAAAq74u
 

PaintTinJr

Member
Nah, I haven't installed any RAID driver. When I built my PC I just populated all the SATA ports, plus a USB port, with HDDs for storage and backup purposes.

Not every game behaves like HZD and Days Gone, mind. DOOM Eternal is optimized for 8C Zen 2, and here's how it uses the CPU - as you can see, it spreads the workload almost equally across all the cores and threads, as it should:
VdOkems.png

EcvagTcUwAA2NkO


Here's HZD below for comparison's sake:

LzBiP3T.png


Bonus: ROTTR DX11 vs DX12:

EcvagTbVAAAq74u
Yeah, I should have looked more closely - it isn't two drives in a software RAID, but one SSD presenting as two disks.
 

Armorian

Banned
Was the scene using all the cores and threads to 100%? If not, then those frame-rate dips can also occur due to underutilization of the CPU cores/threads, giving you the impression that the game has fully tapped the CPU when, in fact, it has not.

In Horizon Zero Dawn, Meridian is the section that puts the heaviest burden on the CPU - here, no matter what, it's not possible/easy to get a locked 120fps even at 720p with a 3700X; it always hovers around 90-100fps. You may think the 3700X is not up to snuff, and I thought that too until I saw the utilization of individual cores/threads - then I realized those dips are simply a case of underutilization of the hardware. The game's coded to utilize a certain number of threads, and that's what it will use.

Individual cores/threads usage in Meridian:
- 90-100fps (with GPU utilization sitting below 50%)
- unlocked fps
- 50% of 720p


You can see just how many threads are sitting empty with zero work in them, and overall CPU usage is barely 50% here. Had they coded the game to use more of the 8C/16T CPU, GPU utilization would have increased, and with it the fps.

Remember Zen 2 on consoles is 4x faster than Jaguar CPU according to Microsoft, so a 3700X should be perfectly capable of delivering at least 4x the perf of base PS4 (30fps -> 120fps), but it's only doing a little over 3x of PS4's Jaguar CPU (and btw it's a locked 30fps in Meridian on PS4).

Sure, CPUs like the 11700K, 5800X, and even 5600X might reach 120fps here, but that'd simply be their faster IPC and other architectural improvements brute-forcing their way to those fps in unoptimized code. Looking at the 3700X usage above, you can tell there's potential that is not being fully tapped, and that's what utilization looks like in many games today that were designed with Jaguar CPUs in mind.

EDIT: here's another example using Days Gone where it's averaging under 120fps with plenty of unutilized cores/threads:


What do you mean by "won't"? Why can't a 3700X carry you through the gen when games are going to be designed for Zen 2 because of the consoles?

Most games don't use more than 8 threads, and that's why single-core performance is way more important than a huge number of cores.

Jedi FO is in fact a 4-thread game IIRC; it leans mostly on single-core performance.

Depends on the title-to-CPU combination, really. Cyberpunk ran like shit on my 2700X, and that's 8C/16T.

This CPU is just weak, and CP wrecks CPUs with RT. The game can use many threads, but there is no improvement above 8, I believe.

If you aren't testing RT in 2021 then you aren't testing jack. Once you take that into account the landscape shifts very quickly. Outlets like HWU unfortunately prefer to be wilfully ignorant of best testing methods.
Only the German sites seem to understand how to do it atm:

PS: Never quote GameGPU. 99% of their results are "estimated" aka FAKE.

They have to test at least some configurations. How do you know they estimate the results?

There will be PC ports of this gen's console games when cross-gen is over, and this gen's consoles are using 8 cores.

Yes, but the console CPU itself is much weaker than desktop Zen 2, and devs have 7 cores and 14 threads available.

Pretty sure even the new consoles still only give devs access to 6 cores / 12 threads max for games, so a 6 core / 12 thread Zen2 CPU (Ryzen 5 3600X) should be fine for meeting minimum required spec for playing next gen games on PC. I'd assume an 8 core / 16 thread will be recommended tho, and will probably play a bit better.

7/14.

Depends on how the game engine is written. The more threaded it is the more you gain from additional cores.

Most games stop scaling above 8 threads. There are exceptions (and there will be more in the future, of course) like SotTR.

Why are the Zen 3's slaughtering the 10XXX and 11XXX's so badly here? Typically they're near margin of error territory, not a murder scene.

Most CPU outlets only test a handful of games over and over again; this site tests almost all new games:


Because in a test all about cores they picked a game that doesn't care much about cores.



Also, probably no optimization for those CPUs.

Most games behave like this. For games, you mostly need stronger cores more than more cores. And this won't change anytime soon.
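The "stops scaling above 8 threads" behaviour is essentially Amdahl's law: once the serial part of the frame dominates, extra threads buy almost nothing, which is why per-core speed keeps mattering. A quick illustrative calculation (the 75% parallel fraction is a made-up figure for the sketch, not a measurement of any real engine):

```python
def amdahl_speedup(parallel_fraction: float, n_threads: int) -> float:
    """Best-case speedup over 1 thread when only part of the work is parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_threads)


if __name__ == "__main__":
    # Hypothetical engine where 75% of each frame's work can be parallelised.
    for n in (1, 4, 8, 16):
        print(f"{n:2d} threads: {amdahl_speedup(0.75, n):.2f}x")
    # -> 1.00x, 2.29x, 2.91x, 3.37x: the jump from 8 to 16 threads is tiny.
```

Going from 8 to 16 threads only lifts this hypothetical speedup from ~2.9x to ~3.4x, while doubling per-core speed would double everything.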
 

CuNi

Member
Most games don't use more than 8 threads, and that's why single-core performance is way more important than a huge number of cores.

Jedi FO is in fact a 4-thread game IIRC; it leans mostly on single-core performance.



This CPU is just weak, and CP wrecks CPUs with RT. The game can use many threads, but there is no improvement above 8, I believe.



They have to test at least some configurations. How do you know they estimate the results?



Yes, but the console CPU itself is much weaker than desktop Zen 2, and devs have 7 cores and 14 threads available.



7/14.



Most games stop scaling above 8 threads. There are exceptions (and there will be more in the future, of course) like SotTR.



Most CPU outlets only test a handful of games over and over again; this site tests almost all new games:




Most games behave like this. For games, you mostly need stronger cores more than more cores. And this won't change anytime soon.

Yes and no. I never got the logic: saying "the game uses X cores, so you only need X cores" is wrong. If a game needs 8 cores, it's smarter to get more than 8 cores rather than 8 faster cores, obviously as long as the frequency difference isn't like 1GHz or something ridiculous.

People forget that PC gamers don't just have the game running. You've got Discord, Steam, maybe even other launchers, Chrome/Firefox, etc. The game performs better if it can utilize 8 cores and offload the other programs onto other cores, instead of all the programs competing for the same cores that just run faster. CPU speed mattered most back when games ran on a single thread, because then yes, you'd notice a real difference. Today, when games regularly use more cores and the work is ever more spread out, more cores are better than one faster core.
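For what it's worth, that offloading can even be forced by hand with CPU affinity. A minimal sketch using Linux's `os.sched_setaffinity` (Windows has the equivalent `SetProcessAffinityMask`, exposed through Task Manager's "Set affinity"); the half-and-half split below is purely illustrative:

```python
import os


def pin_to_cores(pid: int, cores: set) -> set:
    """Restrict `pid` (0 = this process) to `cores`; return the new affinity.

    Linux-only: uses the sched_setaffinity(2) syscall.
    """
    os.sched_setaffinity(pid, cores)
    return os.sched_getaffinity(pid)


if __name__ == "__main__":
    everything = os.sched_getaffinity(0)
    # Illustrative split: "game" on the first half of the cores, the rest
    # left free for Discord, browsers, launchers, etc.
    half = set(sorted(everything)[: max(1, len(everything) // 2)])
    print("pinned to:", pin_to_cores(0, half))
    pin_to_cores(0, everything)  # restore the original affinity
```

In practice the OS scheduler does this migration automatically, which is why manual pinning rarely helps on desktops with spare cores.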
 

Armorian

Banned
Yes and no. I never got the logic: saying "the game uses X cores, so you only need X cores" is wrong. If a game needs 8 cores, it's smarter to get more than 8 cores rather than 8 faster cores, obviously as long as the frequency difference isn't like 1GHz or something ridiculous.

People forget that PC gamers don't just have the game running. You've got Discord, Steam, maybe even other launchers, Chrome/Firefox, etc. The game performs better if it can utilize 8 cores and offload the other programs onto other cores, instead of all the programs competing for the same cores that just run faster. CPU speed mattered most back when games ran on a single thread, because then yes, you'd notice a real difference. Today, when games regularly use more cores and the work is ever more spread out, more cores are better than one faster core.

You have the results showing that anything more than 6/12 is not needed now, and by the time it is needed, Zen 5 will destroy your Zen 2/3 CPU in every category anyway. More cores for programs in the background makes sense, but in reality anything hogging the CPU much will affect CPU-heavy games; Windows is not good at assigning programs to specific threads, and there will be conflicts. I don't use anything other than a game launcher (Steam etc.) and maybe Spotify. Windows isn't even close to using as many resources as a console OS; without something like Chrome in the background it barely does anything. But yeah, if you are streaming or some shit, more cores come to the rescue. The games themselves don't need them, though.
 

CuNi

Member
You have the results showing that anything more than 6/12 is not needed now, and by the time it is needed, Zen 5 will destroy your Zen 2/3 CPU in every category anyway. More cores for programs in the background makes sense, but in reality anything hogging the CPU much will affect CPU-heavy games; Windows is not good at assigning programs to specific threads, and there will be conflicts. I don't use anything other than a game launcher (Steam etc.) and maybe Spotify. Windows isn't even close to using as many resources as a console OS; without something like Chrome in the background it barely does anything. But yeah, if you are streaming or some shit, more cores come to the rescue. The games themselves don't need them, though.

The issue is, CPUs with more cores are usually on better silicon and thus actually run faster. If you don't OC, the 5950 has the highest official boost clock, so even under your assumption that you need faster cores rather than more cores, the 5950 still boosts 100MHz higher than the 5900, for example, which in turn boosts higher than a 5600 and equal to a 5700.
 

PaintTinJr

Member
The issue is, CPUs with more cores are usually on better silicon and thus actually run faster. If you don't OC, the 5950 has the highest official boost clock, so even under your assumption that you need faster cores rather than more cores, the 5950 still boosts 100MHz higher than the 5900, for example, which in turn boosts higher than a 5600 and equal to a 5700.
The other thing about brawny CPUs with more cores is that - with Intel, definitely - the L3/LLC cache size is either the same or proportionally bigger as you go up the core-count/price scale, so in low-core-utilisation workloads the shared L3 cache gives proportionally more effective memory bandwidth per core in like-for-like configs where only the CPU has been changed.

I also suspect, on that basis, that if someone disabled cores via the BIOS on the higher core-count chip to give parity, and then overclocked, the better silicon, as you mention, would actually clock the remaining cores higher.
 
Last edited:

Dream-Knife

Banned
Yes and no. I never got the logic. While saying "game uses X cores so you also only need X cores" is wrong. If a game needs 8 cores, it's smarter to get more cores than 8 faster cores, obviously as long as the difference isn't like 1ghz or something ridiculous.

People forget that pc gamers don't just have the game running. You got discord, steam, maybe eben other launchers, chrome/Firefox etc. The game performs better if it can utilize 8 cores and offload the other Programms onto other cores instead of all the programs competing for the same cores that just run faster. CPU speeds were mostly important when we were still talking about games running on 1 thread alone because then yes you'd notice a noticeable difference. In today's time where games regularly already use more cores and it starts to be even more spread out, more cores is better than one faster core.
People greatly overestimate how much that uses. I have Steam, Discord, VSCode, Brave with 5+ tabs, and WSL2 running, and I'm using 1-5% CPU and 9.3GB of RAM.
The issue is, CPUs with more cores are usually on better silicon and thus actually run faster. If you don't OC, the 5950 has the highest official boost clock, so even under your assumption that you need faster cores rather than more cores, the 5950 still boosts 100MHz higher than the 5900, for example, which in turn boosts higher than a 5600 and equal to a 5700.
Aren't most lower-core chips the same silicon as their higher-core counterparts, just with cores disabled to increase yield?
 

CuNi

Member
People greatly overestimate how much that uses. I have Steam, Discord, VSCode, Brave with 5+ tabs, and WSL2 running, and I'm using 1-5% CPU and 9.3GB of RAM.

Aren't most lower-core chips the same silicon as their higher-core counterparts, just with cores disabled to increase yield?

I have 4 Chrome windows open, each with like 50 tabs. But I know I'm an edge case, so I won't bring this in as an argument.

Yes, cores are disabled for yield, but the exact same reason they have to disable cores - the whole chip isn't perfect - is also why lower-core CPUs usually have lower frequencies.

Also, there is a difference in the chip between high end and low/mid end, at least on AMD (I haven't checked Intel yet). AMD's 5900 and 5950 have 2 CCDs, so 2x 32MB of L3 cache, while the 5800 and lower only have 1 CCD, which means only one 32MB L3 cache.

There is a lot more going on under the hood than just disabling some cores and/or having more cores. Architecture needs to account for that.
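On Linux you can see that CCD split directly in sysfs: each distinct L3 shows up with its own `shared_cpu_list`. A small sketch (the helper name is mine; a dual-CCD part would report two 32MB entries, a single-CCD part just one):

```python
from pathlib import Path


def l3_caches():
    """Return {shared_cpu_list: size} for each distinct L3 cache (Linux sysfs)."""
    found = {}
    for index in Path("/sys/devices/system/cpu").glob("cpu[0-9]*/cache/index*"):
        level = index / "level"
        if level.exists() and level.read_text().strip() == "3":
            cpus = (index / "shared_cpu_list").read_text().strip()
            found[cpus] = (index / "size").read_text().strip()
    return found


if __name__ == "__main__":
    for cpus, size in l3_caches().items():
        print(f"L3 {size} shared by CPUs {cpus}")
```

On Windows, CPU-Z or the `GetLogicalProcessorInformationEx` API reports the same cache topology.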
 

Hezekiah

Banned
If I was buying a CPU I would buy an eight-core, but I have an 8700K - 6 cores, 12 threads. Do we think this should be good for most of the generation?
 

Kenpachii

Member
How is your memory so high?

10200 = memory clock on the GPU; stock is 9500, I believe. GDDR6X.

3200MHz on the CPU front = 12GB used. I've got 32GB; it could be background processes, or simply Windows reserving more memory space, which does happen when you put more memory into your PC - it keeps more available for faster operations.
 
Last edited:

RoadHazard

Gold Member
Console games don't get to use all the cores for game code; there's an OS running in the background, etc. Although I guess that's true on a gaming PC too.
 

PaintTinJr

Member
I know. That's why I said it's about more than just core count. Those PS4/Xbox CPUs sucked ass even for the time. They really cheaped out there.
I wholeheartedly agree - they weren't what we wanted as gamers following the Cell BE or Xenon, but consider the situation, and where we are now...

The PS5 (at least) has a full 8-core brawny CPU for game code, with audio and I/O offloaded onto the Tempest Engine and the I/O complex - an I/O solution with its own ESRAM, so it can DMA directly to unified memory, much like Cell BE SPUs could work as satellite processors, un-slaved from the primary controlling PPU core. The CPU's new brawny cores can also support heavy use of 256-bit AVX2 instructions AFAIK - for CPU ray tracing - going by Cerny's comments in the Road to PS5. But this was only possible because PlayStation and Xbox switched lanes with their processor designs in moving to APUs, and took the 7-year generation hit of Jaguar cores - because Intel weren't competitive on pricing, PowerPC wasn't guaranteed to survive as a mainline architecture, and Jaguar cores were all that was technically available for APUs at the time.

I presume the choice of an APU was driven by three factors. Firstly, memory prices were no longer falling over time as they once had, so having one unified memory pool reduced potential hardware costs for the whole generation, at day one. Secondly, games keep lots of redundant data when using discrete memory pools for CPU and GPU - wasteful in processing latency, wasteful in northbridge bandwidth, and very wasteful in memory use - all of which can be eliminated with unified memory in an APU solution, gaining performance too. Thirdly, the Cell BE and Xenon in the PS360 gen were in effect used as single-core, two-way general-purpose CPUs for game logic on their primary PPU - the 360 had two additional PPUs, but in hindsight the tasks those additional PPUs did could (and now are) be done more efficiently with ASICs, and using those PPUs for game features consumed more of the 360's memory bandwidth, which was likely better spent on the GPU, given that its tiny EDRAM already made it difficult to match the (720p) HD-ready resolutions of that generation and already strained the unified memory.

So, IMHO, spending more money wasn't an option to reach where we are now - where a 12-core CPU, and/or an RTX IO board with a 6-core, might be needed by the end of the generation to match the PS5's CPU use.
 
Last edited:

Marlenus

Member
While HUB are correct that you only need a set amount of performance for gaming, there are going to be some cases where slack devs expect 14 threads, and even if your 12 threads are fast enough, the lack of 2 threads is going to hurt performance.

I also feel that it somewhat misses an element. I intend to get a 5800X3D if AMD makes that SKU available, because I expect that with a B550 motherboard it will be cheaper than 6-core Zen 4 + B650 + DDR5, and I don't think the performance delta will be worth the cost.
 

Md Ray

Member
While HUB are correct that you only need a set amount of performance for gaming, there are going to be some cases where slack devs expect 14 threads, and even if your 12 threads are fast enough, the lack of 2 threads is going to hurt performance.
Exactly. In such cases, there's a chance your 1% and 0.1% low fps can suffer even if the average is good enough.
 