
AMD's Ryzen 3000 Review Thread: More Cores, More Threads.

llien

Member
AMD kicking ass at lower clocks and lower power consumption. This reminds me of... something.

Good news aside, folks, what do you think about going with more than 8 cores if I don't do CPU-heavy stuff? (Well, some IDE/compiling, of course, but I'm patient enough for it to finish even on a dual-core notebook.)

So these are half the die size of the 9900K, right, and still slower for gaming?
Yeah. And it costs the same, if you add $150 to the price of the AMD CPU.
At 1440p the perf difference is within 2%.

Even my grandma can spot it, even without glasses.
 
Last edited:

Kenpachii

Member
Who the fuck benches a CPU at 1440p? That's the most idiotic shit I've ever heard.

You bench a CPU at 720p to not get GPU bottlenecked. Dumb as hell. Benching for dummies.

Why not bench those CPUs at 8K on ultra while you're at it? I bet they're all going to run Metro at 10 fps even on a goddamn 10+ year old CPU at 2GHz.

Now I need to see Ryzen performance with Cemu. If it comes close to the 9700K I will probably bite on it.
 

llien

Member
You bench a CPU at 720p to not get GPU bottlenecked. Dumb as hell. Benching for dummies.

What year is this, seriously?

720p benching was supposed to reflect the alleged "bottleneck on CPU". The idea was that if a CPU performs better at ridiculously low resolutions, it would rock in future games that will be more CPU demanding.

Now, please explain why new games ended up running BETTER on Ryzen CPUs, contrary to those expectations?
Obviously, the "720p futureproofing" is BS.


Actual game benchmarks at actual resolutions show you what you'd get in realistic scenarios.
What's silly is shelling out $150+ for ethereal "future proof" performance. That money is better spent on a GPU right here, right now.
 
Last edited:

Kenpachii

Member
What year is this, seriously?

720p benching was supposed to reflect the alleged "bottleneck on CPU". The idea was that if a CPU performs better at ridiculously low resolutions, it would rock in future games that will be more CPU demanding.

Now, please explain why new games ended up running BETTER on Ryzen CPUs, contrary to those expectations?
Obviously, the "720p futureproofing" is BS.


Actual game benchmarks at actual resolutions show you what you'd get in realistic scenarios.
What's silly is shelling out $150+ for ethereal "future proof" performance. That money is better spent on a GPU right here, right now.

What are you even smoking

You push low resolution to take stress off your GPU so it's not going to hit the 100% usage wall, which is going to affect your CPU results.

The moment a GPU hits 100% usage, fps will no longer increase even if the CPU is only 70% taxed. That means that CPU will show roughly the same results as another CPU sitting at 100% taxation, even though the one at 70% obviously has more headroom and would be faster if the GPU were faster.

It's pointless to test this.

Actual resolutions don't show you how fast a CPU is.

How is this hard?

It's like slamming a 2080 Ti in a 2GHz Core 2 Duo and coming to the conclusion the GPU is just as fast as a 50-buck 750 Ti.
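
(To make that bottleneck argument concrete, here's a minimal editorial sketch, not something from the thread: the observed frame rate is roughly capped by whichever of the CPU or GPU is slower, so dropping the resolution raises the GPU's ceiling and lets the CPU difference show. All numbers are invented for illustration.)

```python
# Hypothetical illustration: observed frame rate is roughly capped by the slower
# of the CPU and the GPU. All numbers below are made up, not benchmark results.

def observed_fps(cpu_fps_ceiling: float, gpu_fps_ceiling: float) -> float:
    """A frame can't be delivered faster than either the CPU or the GPU allows."""
    return min(cpu_fps_ceiling, gpu_fps_ceiling)

gpu_ceiling = {"720p": 300.0, "1440p": 120.0}  # GPU ceiling falls as resolution rises
cpu_a, cpu_b = 180.0, 140.0                    # two CPUs with different simulation speed

for res, gpu in gpu_ceiling.items():
    print(res, observed_fps(cpu_a, gpu), observed_fps(cpu_b, gpu))
# 720p  -> 180.0 vs 140.0 : the CPU difference is visible
# 1440p -> 120.0 vs 120.0 : both GPU-bound, the CPUs look identical
```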

Yep. I'm good with my 8700K @ 4.8GHz on all cores and there's no reason for me to question my new Z390 build :messenger_sunglasses:
I don't give a damn about video rendering and stuff like that, which tends to be aimed at workstations.


Shadow of the Tomb Raider is a much better game and the best in the trilogy.

Edit: flawed logic, removed it.
 
Last edited:

JohnnyFootball

GerAlt-Right. Ciriously.
What are you even smoking

You push low resolution to take stress off your GPU so it's not going to hit the 100% usage wall, which is going to affect your CPU results.

The moment a GPU hits 100% usage, fps will no longer increase even if the CPU is only 70% taxed. That means that CPU will show roughly the same results as another CPU sitting at 100% taxation, even though the one at 70% obviously has more headroom and would be faster if the GPU were faster.

It's pointless to test this.

Actual resolutions don't show you how fast a CPU is.

How is this hard?



Honestly, out of that benchmark the 9700 sits at 230 bucks and trumps them in all of those games, it seems. Sure, it won't have long legs for future titles, but honestly you save a bunch of cash with it even if you get worse performance later on. Also, emulation is probably going to push a lot better performance on that 9700 anyway.

Also, an 8700K with a 5GHz OC will probably be cheaper to get, especially with those X570 motherboards probably being overpriced as hell for their features.

Anyway, still going to wait on Cemu performance on those Ryzens and see if people can push more performance out of them.

It's like slamming a 2080 Ti in a 2GHz Core 2 Duo and saying it performs the same way as a 750 Ti.
Where is the 9700 being sold for $230?!
 

llien

Member
You push low resolution to take stress off your GPU so it's not going to hit the 100% usage wall, which is going to affect your CPU results.

Dude, you are the one not getting it; it would help if you calmly re-read what is being communicated to you.
Let me try again: NOBODY plays at 720p nowadays.
The idea of testing current games at 720p is to artificially inflate FPS and strain the CPU more, hence showing "which CPU is faster at gaming".

Now, ask yourself, faster at WHICH gaming? 720p? Nobody plays at that resolution.
It is about some elusive "future" games that will need even more CPU at resolutions people actually play: 1080p, 1440p, 4K.

Now, 720p benchmarks FAILED MISERABLY at predicting how Zen fares vs Intel in new games: contrary to expectations, AMD CPUs do better in newer games.

Now, given that Ryzen is what will be inside Xbox Next/PS5, try to predict how well those games will run on 6+ core AMD CPUs at normal resolutions.
 

LordOfChaos

Member
@ 33 sec: Ryzen 2 matching Intel in CS:GO at 1080p, since that's what everyone likes to see.


And they're doing this even with the mentioned CCX picking issues with sub-optimal core designations. So it should get even better with a few patches.

Also, how is that always an issue to be fixed? One would think that it coming up with each of the last several AMD architectures would have made them keen to get it fixed with Microsoft well beforehand. Maybe that's still a core disadvantage of being the little guy; Intel has little problem getting Windows optimizations.
 

SonGoku

Member
There's absolutely ZERO security risk in leaving HT enabled. Literally no one gives a fuck about you, about your PC, and never will, just like for the last 10 years or so. All of this "security" nonsense in regards to Intel's CPUs is pure bullshit and normal/regular people should just ignore it completely.
Still an unlocked backdoor just waiting to be opened; that's like using a car with a broken seatbelt and claiming you drive slow so it won't be an issue anyway 🤦‍♂️
Intel knowingly sells CPUs in 2019 full of security vulnerabilities found years ago and gets white knighted. There truly is a DF for anything.
 

PhoenixTank

Member
And they're doing this even with the mentioned CCX picking issues with sub-optimal core designations. So it should get even better with a few patches.

Also, how is that always an issue to be fixed? One would think that it coming up with each of the last several AMD architectures would have made them keen to get it fixed with Microsoft well beforehand. Maybe that's still a core disadvantage of being the little guy; Intel has little problem getting Windows optimizations.
You see, I thought I'd read that Win10 1903 was meant to help a lot with scheduling deficits and the CCX stuff. Not clear what's up with that, and I can't say I have noticed which build # the reviewers are using. May have only applied to the TR lineup anyway.

Edit: Bitwit & GN at least are on 1903, I'll have to go back and check.
 
Last edited:
I'm not lying when I say that I am seriously considering a 3900X as an upgrade from my 5820K. I want that 12 core lovin' for encoding videos.

I have not owned an AMD CPU since the Athlon 64 days. That is literally how long it has been since I have actually considered an AMD CPU.

If I were Intel, I would be shitting myself these days because I know I'm not the only person out there who has been on Intel for more than 15 years looking at AMD seriously for the first time in what feels like a lifetime.
 
Last edited:
I'm not lying when I say that I am seriously considering a 3900X as an upgrade from my 5820K. I want that 12 core lovin' for encoding videos.

I have not owned an AMD CPU since the Athlon 64 days. That is literally how long it has been since I have actually considered an AMD CPU.

If I were Intel, I would be shitting myself these days because I know I'm not the only person out there who has been on Intel for more than 15 years looking at AMD seriously for the first time in what feels like a lifetime.

Hell has frozen over. Pigs have just been seen flying over the city of Newport. Never thought I'd see the day where you're considering an AMD product :messenger_grinning_squinting:

This release is crazy.

Maybe I'll cross over and buy Nvidia. (actually I'm running a 1070 Ti FTW2, and had about 6 different Green cards. That's why you should listen to me folks. I speak from a position of insight. A true voice of reason in the darkness).
 
Last edited:
Hell has frozen over. Pigs have just been seen flying over the city of Newport. Never thought I'd see the day where you're considering an AMD product :messenger_grinning_squinting:

This release is crazy.

Maybe I'll cross over and buy Nvidia. (actually I'm running a 1070 Ti FTW2, and had about 6 different Green cards. That's why you should listen to me folks. I speak from a position of insight. A true voice of reason in the darkness).
It's not that big a deal. And don't flatter yourself either. Nvidia's GPU lead over AMD is both insurmountable and meaningful, and the result is that Nvidia justifiably dominates AMD in the GPU arena.

Intel has been standing still for almost 5 years now; that's how long a period of time they have gifted AMD to catch up, first with the absolutely noteworthy original Zen core and now this 2nd-gen refined Zen 2 core on a smaller process than what Intel can offer. Make no mistake: the situation is what it is because Intel keeps fucking up in addition to AMD making great strides. If Intel was already on 10nm by now, we would not be having this conversation at all. Intel, however, seems pretty fucked these days, and I expect that AMD will continue to refine Zen, so right now the spotlight is on AMD, as it justifiably should be.
 

JohnnyFootball

GerAlt-Right. Ciriously.
It's not that big a deal. And don't flatter yourself either. Nvidia's GPU lead over AMD is both insurmountable and meaningful, and the result is that Nvidia justifiably dominates AMD in the GPU arena.

Intel has been standing still for almost 5 years now; that's how long a period of time they have gifted AMD to catch up, first with the absolutely noteworthy original Zen core and now this 2nd-gen refined Zen 2 core on a smaller process than what Intel can offer. Make no mistake: the situation is what it is because Intel keeps fucking up in addition to AMD making great strides. If Intel was already on 10nm by now, we would not be having this conversation at all. Intel, however, seems pretty fucked these days, and I expect that AMD will continue to refine Zen, so right now the spotlight is on AMD, as it justifiably should be.
It's hard for any rational person to argue with this. If Intel had been able to get their 10nm CPUs off and running on schedule, the story would be very different.

But for whatever reason, it didn't happen.
 
All this being said: I feel like with the kind of core counts AMD is now pushing on the high end (12c/24t on 3900X, 16c/32t on the upcoming 3950X) they should be offering us quad-channel memory because the memory bandwidth bottleneck is getting ridiculous. Now I gather that AMD reserves quad-channel for Threadripper in much the same way Intel reserves quad-channel for HEDT (Skylake-X), to segment their markets in a meaningful fashion.

I'm thinking of waiting until September to see how the 3950X performs compared to the 3900X but also I really want to see what AMD will charge for Ryzen 3000 Threadrippers because I would really like to be on quad-channel memory with these kinds of core counts. I'm probably a bit spoiled by having had quad-channel memory paired with my 5820K all these years because back in the day you could get into HEDT for $389. It's hard to believe that's what my 5820K cost back in the day. Intel seems to have completely forgotten about price-performance though which is why you can't get HEDT at those price points anymore, a damn shame.
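
(As a rough editorial sketch of the bandwidth argument above, assuming DDR4-3200 and 64-bit channels; these are illustrative numbers, not figures from the thread.)

```python
# Back-of-the-envelope peak DRAM bandwidth: channels * transfer rate (MT/s) * 8 bytes.
# DDR4-3200 and these core counts are assumptions for illustration only.

def peak_bw_gbs(channels: int, megatransfers: int) -> float:
    return channels * megatransfers * 8 / 1000  # GB/s, decimal units

for cores, channels in [(8, 2), (12, 2), (16, 2), (16, 4)]:
    total = peak_bw_gbs(channels, 3200)
    print(f"{cores:>2} cores, {channels}-channel DDR4-3200: "
          f"{total:.1f} GB/s total, {total / cores:.2f} GB/s per core")
# Dual channel stays at ~51.2 GB/s no matter the core count, so per-core bandwidth
# shrinks as cores grow; quad channel (~102.4 GB/s) restores it.
```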
 

LordOfChaos

Member
Hmm, looks like most of the original testing wasn't hitting Ryzen's full clock speeds due to a driver error? The fix to the WHEA error and the BIOS make the chip boost to 4.65GHz, while it was struggling to get to 4.6 before. So things will look even better.

 
In unrelated news, my Noctua NH-D14 keeps winning. I've been using it since I got it for my i7-950 and Noctua keeps supporting it with new mounting kits.

Gotta love Noctua, I feel like I'll be able to use the trusty D14 forever with this kind of support.
 

JohnnyFootball

GerAlt-Right. Ciriously.
Hmm, looks like most of the original testing wasn't hitting Ryzen's full clock speeds due to a driver error? The fix to the WHEA error and the BIOS make the chip boost to 4.65GHz, while it was struggling to get to 4.6 before. So things will look even better.


These sorts of things are ALWAYS going to be an issue on day 1.
 

Alexios

Cores, shaders and BIOS oh my!
Exciting times. Hope that by the time I do an upgrade (not any time soon, just saying) we'll be able to get a nice generation-lasting beastie for ~$500 or less, including motherboard, CPU and RAM.
 
Last edited:

thelastword

Banned
And they're doing this even with the mentioned CCX picking issues with sub-optimal core designations. So it should get even better with a few patches.

Also, how is that always an issue to be fixed? One would think that it coming up with each of the last several AMD architectures would have made them keen to get it fixed with Microsoft well beforehand. Maybe that's still a core disadvantage of being the little guy; Intel has little problem getting Windows optimizations.
I guess it's an AMD feature, there's always something to patch or a driver to update.... Thing is, many issues don't pop up or get to your attention till many different people all over the world are benchmarking, testing and pushing your cards to the limit; there's always some oversight as well....AMD is also stretched between a CPU team and a GPU team......Nvidia alone has over 11,000 employees just for graphics....AMD has about 10,000 for both RTG and Ryzen.....Intel should have some crazy number, I imagine....

It's hard for any rational person to argue with this. If Intel had been able to get their 10nm CPUs off and running on schedule, the story would be very different.

But for whatever reason, it didn't happen.
The "would coulda shoulda" argument.....

If I entered Bill Gates Garage when he invited me back then, I'd probably be Co-Owner of Microsoft today, but I didn't......Ohhhhhhh…..things could have been so different....:messenger_pouting:
 
Linus says that he noticed wild framerate variations in some of the tests he did. Cursed scheduler still holding AMD back it seems.


I wonder if this is really fixable or not since the CCX/chiplets design is so fundamentally different from a traditional monolithic core design.
 
Last edited:

LordOfChaos

Member
Linus says that he noticed wild framerate variations in some of the tests he did. Cursed scheduler still holding AMD back it seems.


I wonder if this is really fixable or not since the CCX/chiplets design is so fundamentally different from a traditional monolithic core design.


I'm sure it's fixable, as very similar issues have already been addressed, and he saw an improvement with just crudely assigning cores logically in Windows task manager.





This is also why Linux can get a lot more out of Threadripper.

What I wonder about is why, fully knowing their own architectures, they haven't managed to get these rolled into Windows in advance. I think that's one remaining disadvantage of being so much smaller, we've always called it Wintel for a reason. Meanwhile the Linux kernel is much quicker to become aware of how to schedule for AMD.

Bulldozer: "Do things on distinct modules if you can"
Windows, eventually: "Oh, ok"

Ryzen 1: "Don't cross CCX's for like-tasks"
Windows, eventually: "Oh, ok"

Ryzen 3: "Don't cross CCX's for like-tasks"
Windows, eventually: "Oh, ok"

I'm not sure if the fixes in 1903 also address Ryzen 3000 fully? It seems like there are still issues, even if it did fix things fully for prior families of Ryzen.
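
(For what "assigning cores" amounts to in practice, here is a hedged sketch using psutil to pin a process to a set of logical CPUs, which is roughly what setting affinity in Task Manager does. The assumption that CPUs 0-7 belong to one CCD/CCX is purely illustrative; the real mapping depends on the system.)

```python
# Sketch: restrict a process to one group of logical CPUs, roughly what setting
# affinity in Windows Task Manager does. CPUs 0-7 standing for one CCD/CCX is an
# assumption for illustration; check your actual topology before relying on it.
import os
import psutil

def pin_to_cpus(pid: int, cpus: list[int]) -> list[int]:
    proc = psutil.Process(pid)
    proc.cpu_affinity(cpus)       # schedule this process only on the given CPUs
    return proc.cpu_affinity()    # read back the applied mask

if __name__ == "__main__":
    print("Now pinned to:", pin_to_cpus(os.getpid(), list(range(8))))
```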
 
Last edited:

Rentahamster

Rodent Whores
While the new CPUs are a good value, how is the value aspect of the new X570 motherboards compared to the Intel equivalents?
 

Agent_4Seven

Tears of Nintendo
Still an unlocked backdoor just waiting to be opened; that's like using a car with a broken seatbelt and claiming you drive slow so it won't be an issue anyway 🤦‍♂️
Intel knowingly sells CPUs in 2019 full of security vulnerabilities found years ago and gets white knighted. There truly is a DF for anything.
I'm not defending them, but as I've said already, the reality is that nobody cares about ordinary people and their PCs. The best they can do now is Winlock your PC, and even then, if you're not stupid AF and you have NIS (for example) on your PC, there's absolutely ZERO chance you'll be in trouble, and I can tell you for a fact that is true.
 
Thought so. I really want to know how much influence the chiplet binning will have on OC potential. 3800 vs 3700 should show that quite well.
 


DerBauer is the only guy who really tried to OC these things and they are not touching their maximum advertised boost clocks much less going beyond. I'm not sure what's going on here but AMD might want to clarify exactly what realistic clock speed expectations are. The TSMC 7nm process has never been used on dies this big before, 7nm of course doubles power density over 14nm, and heat is a huge factor here. Overclocking on something which isn't liquid nitrogen is basically nonexistent, and delidding doesn't change anything there.
 
Last edited:

Kenpachii

Member
I'm sure it's fixable, as very similar issues have already been addressed, and he saw an improvement with just crudely assigning cores logically in Windows task manager.





This is also why Linux can get a lot more out of Threadripper.

What I wonder about is why, fully knowing their own architectures, they haven't managed to get these rolled into Windows in advance. I think that's one remaining disadvantage of being so much smaller, we've always called it Wintel for a reason. Meanwhile the Linux kernel is much quicker to become aware of how to schedule for AMD.

Bulldozer: "Do things on distinct modules if you can"
Windows, eventually: "Oh, ok"

Ryzen 1: "Don't cross CCX's for like-tasks"
Windows, eventually: "Oh, ok"

Ryzen 3: "Don't cross CCX's for like-tasks"
Windows, eventually: "Oh, ok"

I'm not sure if the fixes in 1903 also address Ryzen 3000 fully? It seems like there are still issues, even if it did fix things fully for prior families of Ryzen.


Nobody uses those chips, that's why they don't care.

Common thing.

That's why going for niche products is never a good idea no matter what brand. Zero support will come from it. Same with SLI etc.
 

Kenpachii

Member
Because of the many cores I wonder if disabling SMT and hyperthreading for games is going to result in far more performance than all those benchmarks really push out.

Let's be honest here, is there any game that uses more than 8 cores anyway?
 
Because of the many cores I wonder if disabling SMT and hyperthreading for games is going to result in far more performance than all those benchmarks really push out.

Let's be honest here, is there any game that uses more than 8 cores anyway?
Assassin's Creed does; it will max all 8 threads on my 7700K @ 5GHz if I don't lock my fps to 60 :/ . Whether it's optimisation or their DRM on top of DRM, who knows.
 

petran79

Banned
Tomorrow is when I will decide whether I will keep my i7-4790K for a while longer, or upgrade. This CPU is 5 years old, but doesn't feel like it. Still, 3x the cores at a reasonably low price sounds too good. However, I would need to get a new mobo and RAM as well, and maybe a new power supply just to be safe, since this one is kind of old.

If it is just for gaming, keep the 4790k and even overclock it.
 

llien

Member
Guys, beware of ASRock’s X570 Taichi board.
As Tom's Hardware figured out, it consumes 30W more, and up to 50-60W more under stress load with a 3700X.
It seems to bump the voltage to 1.31V (others sit at 1.2V), which ends up giving 0.1GHz higher clocks; perhaps that's why it was tuned that way.
Nothing that can't be fixed via a BIOS update, though.


Power consumption of Zen 2 is simply amazing: the 8-core 3700X runs cooler than the 4-core 7700K, and the 9900K consumes nearly twice as much.

[attached image: VaeZvgj.png]
 
Last edited:
Because of the many cores I wonder if disabling SMT and hyperthreading for games is going to result in far more performance than all those benchmarks really push out.

Let's be honest here, is there any game that uses more than 8 cores anyway?
Gamers Nexus did do tests with SMT disabled, which did result in higher scores in many of the games that most heavily favored Intel. AMD still has work to do on reducing the SMT overhead, especially now that they are selling 12 and 16 core processors as mainstream CPUs. Intel has been refining their SMT for many years now to reach the point where the SMT overhead is almost zero. There were several instances in the GN review where the 3900X was behind the 3700X and even the 3600 with SMT enabled; they disabled it and suddenly performance popped up and it was close to the 9900K.
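
(One crude way to approximate "SMT off" for a single program without a BIOS trip is to pin it to one logical CPU per physical core. A hedged sketch with psutil follows; the assumption that sibling threads are interleaved evenly is platform-dependent, so treat it as illustration only, not how GN tested.)

```python
# Sketch: keep a process on only one hardware thread per physical core, which very
# roughly mimics "SMT off" for that one process. Assumes logical CPUs are paired
# evenly as (0,1), (2,3), ... which is common but not guaranteed on every platform.
import psutil

physical = psutil.cpu_count(logical=False) or 1   # e.g. 12 on a 3900X
logical = psutil.cpu_count(logical=True) or 1     # e.g. 24 with SMT enabled

stride = max(1, logical // physical)              # 2 when SMT is on, 1 when it's off
one_thread_per_core = list(range(0, logical, stride))

psutil.Process().cpu_affinity(one_thread_per_core)
print("Pinned to logical CPUs:", one_thread_per_core)
```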
 

Kenpachii

Member
What's funny though is that suddenly, with the 3000 series and with next-generation consoles, everything that Nvidia has doesn't look very future-proof anymore. Before the 3000 series, Intel's 8/16 was something like, damn, that's a lot.

Now 8/16 is like, well, that's standard, and anything below feels a bit dated.

Gamers Nexus did do tests with SMT disabled, which did result in higher scores in many of the games that most heavily favored Intel. AMD still has work to do on reducing the SMT overhead, especially now that they are selling 12 and 16 core processors as mainstream CPUs. Intel has been refining their SMT for many years now to reach the point where the SMT overhead is almost zero. There were several instances in the GN review where the 3900X was behind the 3700X and even the 3600 with SMT enabled; they disabled it and suddenly performance popped up and it was close to the 9900K.


Ah, thanks for the video.

It would be interesting if they push some kind of solution forward in future reviews to see which core counts, even on Ryzen, make sense for people.

I just don't see much of a reason to push more threads and cores in the current time window if there are zero games that make use of them.

If that 3900X gets all its threads cut off, that 12-core is still going to idle most of the time, as there honestly is nothing that asks for 12 cores.

I would even go so far as to say AMD should provide a way to disable cores on their CPUs, to make it a 9-core for example, or whatever you want.

Imagine that binned chip pushed back to 8 cores with no SMT, with all the extra thermal headroom pushed into clocks to hit 5+GHz, which would benefit performance in every game that is out right now.

People that have a Ryzen 1700, for example, could disable 4 cores and 8 threads and push 1GHz more frequency; I bet 90% of people would opt for it.

When core requirements go up, you just unlock more and more cores and threads over time.

That's exactly what I did with my i7-870 back in 2009. 4/8 made no sense, nothing made use of it. So I locked the 8 threads out of the CPU through the BIOS to get a 4-core without hyperthreading and push clocks higher, gaining performance in any game.

I think there is a lot of gain to be made with Ryzen on this front.
 
Last edited:
Guys, beware of ASRock’s X570 Taichi board.
As Tom's Hardware figured out, it consumes 30W more, and up to 50-60W more under stress load with a 3700X.
It seems to bump the voltage to 1.31V (others sit at 1.2V), which ends up giving 0.1GHz higher clocks; perhaps that's why it was tuned that way.
Nothing that can't be fixed via a BIOS update, though.


Power consumption of Zen 2 is simply amazing: the 8-core 3700X runs cooler than the 4-core 7700K, and the 9900K consumes nearly twice as much.

[attached image: VaeZvgj.png]

Yeah, it's interesting comparing the 8 cores of the 3700X vs the 9900K and seeing it consume so much less power. I knew the 9900K was an inefficient CPU pushed to the limit, but it's still a shock.
 

Bl@de

Member
I like the 3700X. Good performance and 65W TDP. But as somebody who plays in 1440p ... I still don't see a reason to upgrade from my Xeon 1230v3. Cinebench performance is irrelevant to me. 750€ (CPU, Mainboard, RAM) is simply too much for 30% better performance in 720p (and maybe 5%-10% in best case scenarios in 1440p).
 

thelastword

Banned
Hmm, looks like most of the original testing wasn't hitting Ryzen's full clock speeds due to a driver error? The fix to the WHEA error and the BIOS make the chip boost to 4.65GHz, while it was struggling to get to 4.6 before. So things will look even better.

It's the only issue I saw...I was wondering why all the reviewers weren't hitting the stated boost clocks...Joker said he only got to 4.4 with 1.4+ volts and things were getting hot, so hopefully this fix comes out soon....The CCX problem is also something they should fix, so basically still lots of performance on the table, but people are so satisfied with Ryzen's performance, even without these issues being addressed yet.....So it bodes well.....

Because of the many cores I wonder if disabling SMT and hyperthreading for games is going to result in far more performance than all those benchmarks really push out.

Let's be honest here, is there any game that uses more than 8 cores anyway?
I suspect we still have some HT/SMT issues with Windows or maybe BIOSes.....I've noticed in many tests the 9700K was beating the 9900K too, because the 9700K has no HT.....So I figured, why don't they do non-SMT benches with Ryzen 3000.....

In the Hardware Unboxed review, with the 9900K and the 3700X/3900X at a clock speed of 4.0GHz, the Ryzen chips have a higher single-core result than the 9900K, yet that uplift does not translate to game performance when benching at that frequency....So a few niggles to iron out for sure...
 

thelastword

Banned
Love me some Optimum, but I think something is wrong here...Even when he OC's he gets less performance......I think somebody should really do a 3900X benchmark with SMT OFF

 
People are starting to notice that the 3900X doesn't seem to be touching the specified single-core 4.6GHz boost clock ever. I've never heard of a CPU which doesn't ever reach its specified boost clock before; hopefully this is just a BIOS issue as some have surmised and not something less benign. AMD had better not have been caught lying about its specs.

The Reddit thread about the BIOS issue is here:


Strange how none of the reviewers who had their reviews out on day 1 seem to have noticed something as obvious as the CPU never reaching its specified boost clock...
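
(For anyone wanting to sanity-check boost behaviour themselves, a minimal sketch that samples the OS-reported clock with psutil and prints the peak; this is not what any reviewer used, and per-CPU readings aren't available on every platform.)

```python
# Sketch: sample the reported CPU frequency for a few seconds and print the peak,
# as a rough check against the advertised boost clock. Sampling granularity and
# per-CPU support vary by OS, so treat the numbers as indicative only.
import time
import psutil

peak = 0.0
for _ in range(20):                       # ~5 seconds of sampling
    freqs = psutil.cpu_freq(percpu=True)  # list of per-CPU readings (may be a single entry)
    if freqs:
        peak = max(peak, max(f.current for f in freqs))
    time.sleep(0.25)

print(f"Highest reported clock: {peak:.0f} MHz")
```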
 
Last edited:

thelastword

Banned
I think quite a few reviewers mentioned the boost clocks, that's why it's getting some attention now and AMD is looking to fix the issue....
 
People are starting to notice that the 3900X doesn't seem to be touching the specified single-core 4.6GHz boost clock ever. I've never heard of a CPU which doesn't ever reach its specified boost clock before; hopefully this is just a BIOS issue as some have surmised and not something less benign. AMD had better not have been caught lying about its specs.

The Reddit thread about the BIOS issue is here:


Strange how none of the reviewers who had their reviews out on day 1 seem to have noticed something as obvious as the CPU never reaching its specified boost clock...


Gamers Nexus did not seem to find anything wrong?

[attached image: miJS3pB.png]
 
Gamers Nexus did not seem to find anything wrong?

[attached image: miJS3pB.png]
Yup, I saw that as well. There are apparently multiple boards and BIOS revisions floating around; reviewers ended up on different BIOSes (and of course manufacturers supplied different boards and did/didn't supply BIOS revisions). This guy actually talks about his experiences here:
For me, it was a bit of a mixed bag, perhaps colored by early experiences with the initial board UEFI. Subjectively (again, sample size of 1 here), the max clock speeds for the 3700X were around 4 to 4.1GHz all-core boost in almost all scenarios (even AVX workloads!) and 4.42GHz 1T boost in most 1T workloads.
PBO generally improved the all-core clocks, but the 1T clocks either did not improve or regressed. I am hoping that this gets a little better with more updated UEFIs. Enabling Automatic Overclocking usually resulted in a minor performance regression, even with only a 25MHz bump.
This is also perhaps a loss on the silicon lottery side of things.
To be clear, at stock speeds this is still an incredibly impressive chip. My experience with Ryzen 1000 and Ryzen 2000 series chips was that AMD was incredibly good at binning their chips and that, generally, one just couldn't do better than letting PBO do its thing.
My experience with the 3900X was somewhat different. For the 3900X, which is a beastly 12-core chip, I was finally able to achieve a stable 4.5GHz all-core overclock (even for AVX workloads!) at 1.45V with a manual overclock and a lot of fiddling. PBO, Ryzen Master and UEFI settings to enable Overclocking in Precision Boost Overdrive + Auto Overclock didn't really do quite as well, but I would occasionally see clocks of about 4.55GHz from the 3900X with everything opened up and PBO enabled. Like the 3700X, the 3900X was a little "shy" about clocking up to 4.6GHz, the max boost clock printed on the box, especially on the initial UEFI versions. Later UEFI versions did improve this, and auto overclocking of +75MHz did seem to work.

It does look like a BIOS issue and may be resolvable as BIOS updates filter out to retail boards. AMD did very aggressively bin these CPUs, they are running basically as fast as they realistically can and overclocking is limited, but it does seem that they can reach their specified boost clocks.

It's also probably why the 3950X isn't coming until September; it's going to take AMD 2 months to collect enough chiplets that can hit 4.7GHz before they can release it.

7nm is a new process for AMD, so I guess I'm not surprised these first-gen Ryzen 3000 CPUs are already running near or at their limit. Anyone looking to challenge Intel at 5GHz, where the average 9900K lives, had better be on LN2 because it's not happening otherwise.
 
Last edited:

Kenpachii

Member
Gamers Nexus did not seem to find anything wrong?

[attached image: miJS3pB.png]

It seems to be a thing. AnandTech has a new BIOS update and sees massive improvements.

Also, in that picture of yours the CPU sits mostly at 4.2, pretty much the blue line, so he also has those issues.

[attached image: D-8fA7DXUAAmQUf]
 
Last edited: