
AMD's Ryzen 3000 Review Thread: More Cores, More Threads.

While it's nice that with the CPU alone you can play your games and stream without much issue on Ryzen, GPU encoders have already gotten extremely good and there's very little performance loss as well.

The Navi GPU encoders seem to be pretty shitty atm in comparison to Nvidia. Makes me wonder if it's that way simply because AMD would rather sell that aspect of their CPU lineup, which obviously excels at multi-core applications and multi-tasking.
 

JohnnyFootball

GerAlt-Right. Ciriously.
While it's nice that with the CPU alone you can play your games and stream without much issue on Ryzen, GPU encoders have already gotten extremely good and there's very little performance loss as well.

The Navi GPU encoders seem to be pretty shitty atm in comparison to Nvidia. Makes me wonder if it's that way simply because AMD would rather sell that aspect of their CPU lineup, which obviously excels at multi-core applications and multi-tasking.
The reality is that anyone with Navi and/or Ryzen 3000 at this moment is a beta tester. The BIOS/software will probably stabilize in a month or two.
 
Last edited:
The reality is that anyone with Navi and/or Ryzen 3000 at this moment is a beta tester. The BIOS/software will probably stabilize in a month or two.
Well, it's like that EposVox guy said in his video... AMD hasn't ever really worked on improving their GPU encoder before, and there's really no reason to believe they will here.

But you're right, they could in the future.
 

thelastword

Banned
While it's nice that with the CPU alone you can play your games and stream without much issue on Ryzen, GPU encoders have already gotten extremely good and there's very little performance loss as well.

The Navi GPU encoders seem to be pretty shitty atm in comparison to Nvidia. Makes me wonder if it's that way simply because AMD would rather sell that aspect of their CPU lineup, which obviously excels at multi-core applications and multi-tasking.
At least whatever AMD product you have, you will be able to do top-notch streaming...So it's a win-win for them....


Yet for a long time, I heard Nvidia fans talking about encoding, but boy is Navi a beast with HEVC......6 simultaneous streams...and not a word, not a word, not a word.....
 
While it's nice that with the CPU alone you can play your games and stream without much issue on Ryzen, GPU encoders have already gotten extremely good and there's very little performance loss as well.

The Navi GPU encoders seem to be pretty shitty atm in comparison to Nvidia. Makes me wonder if it's that way simply because AMD would rather sell that aspect of their CPU lineup, which obviously excels at multi-core applications and multi-tasking.
It's more correct to say that AMD has never had the kind of software competency that Nvidia has. Intel doesn't either, which isn't going to make it easy for them to enter the dGPU market. What people never seem to understand is that Nvidia is successful because of hardware AND software. And because they don't understand this, they are unable to replicate it.
 
It's more correct to say that AMD has never had the kind of software competency that Nvidia has. Intel doesn't either, which isn't going to make it easy for them to enter the dGPU market. What people never seem to understand is that Nvidia is successful because of hardware AND software. And because they don't understand this, they are unable to replicate it.
Just look at the open-source drivers for Radeon GPUs. Valve's RADV driver is better than AMD's AMDVLK. And now, Valve is working on the ACO shader compiler, which has allowed GPUs to perform a little better than they do on AMD's LLVM-based compiler.
 
Those gaming benches with the 3600 beating the 9900K. :messenger_tears_of_joy::messenger_beaming::messenger_sunglasses:

Yeah, a little (or a lot) disingenuous. Used a Radeon VII only. He says they did "extensive" testing, yet cherry-picked a handful of tests that show the 9900K and 8700K far worse than they typically are? This coming from a 3700X owner.

Gotta keep these youtubers honest. They're doin' it for the clicks.
 
Yeah, a little (or a lot) disingenuous. Used a Radeon VII only. He says they did "extensive" testing, yet cherry-picked a handful of tests that show the 9900K and 8700K far worse than they typically are? This coming from a 3700X owner.

Gotta keep these youtubers honest. They're doin' it for the clicks.
There are some tests with a 2080 Ti too, in FFXV, which usually favors Nvidia.
 

thelastword

Banned
Yeah, a little (or a lot) disingenuous. Used a Radeon VII only. He says they did "extensive" testing, yet cherry-picked a handful of tests that show the 9900K and 8700K far worse than they typically are? This coming from a 3700X owner.

Gotta keep these youtubers honest. They're doin' it for the clicks.
Yeah, a guy who did the tests many times because he said he couldn't believe what he was seeing.....Then he used the 2080 Ti right after the Radeon VII, just to ensure that nobody would say what you just said; he carried out the same tests as he did for the Radeon VII, and one of the titles was one that heavily favors Nvidia+Intel too.....

It's more correct to say that AMD has never had the kind of software competency that Nvidia has. Intel doesn't either, which isn't going to make it easy for them to enter the dGPU market. What people never seem to understand is that Nvidia is successful because of hardware AND software. And because they don't understand this, they are unable to replicate it.
Nvidia has over 11,000 staff members dedicated to GPUs....AMD has 10,000 staff members dedicated to GPUs, CPUs, APUs (+consoles)....NV is no longer leading in encoding....So these AMD guys must be seriously talented and overworked, I'm thinking....I hope they get a good vacation after this launch, but I doubt it, because they will be going right back to working on a new set of Adrenalin drivers to up perf on the new RDNA architecture and fix the little niggles that come with a launch....Hard workers who are pushing amazing products and software, I must say....The FidelityFX and RIS guys need a raise...The Radeon Chill and HBCC guy too....The Anti-Lag guy just the same.....
 
Yeah, a guy who did the tests many times because he said he couldn't believe what he was seeing.....Then he used the 2080 Ti right after the Radeon VII, just to ensure that nobody would say what you just said; he carried out the same tests as he did for the Radeon VII, and one of the titles was one that heavily favors Nvidia+Intel too.....

Nvidia has over 11,000 staff members dedicated to GPUs....AMD has 10,000 staff members dedicated to GPUs, CPUs, APUs (+consoles)....NV is no longer leading in encoding....So these AMD guys must be seriously talented and overworked, I'm thinking....I hope they get a good vacation after this launch, but I doubt it, because they will be going right back to working on a new set of Adrenalin drivers to up perf on the new RDNA architecture and fix the little niggles that come with a launch....Hard workers who are pushing amazing products and software, I must say....The FidelityFX and RIS guys need a raise...The Radeon Chill and HBCC guy too....The Anti-Lag guy just the same.....

He's purposefully GPU-limiting in a lot of scenarios, and the others must come down to testing error. Honestly not that hard to disprove.

Gamers Nexus tested Tomb Raider at the same resolution and medium settings. The 9900K won by a considerable margin (30 fps). This dude had everything equal at 125 fps on high settings, clearly illustrating a GPU bottleneck and, therefore, a completely pointless test.

These 'tests' really need to stop getting quoted/posted because they're the opposite of scientific and only do a disservice to potential buyers, who really should want the best, most accurate info on where the CPU stacks up in a purely CPU-bound scenario.

Again, I wanted the 3700X to dethrone the 9900K as much as anyone. I'm done with Intel after the 7600K and its short lifespan--screw 4 cores. That the 3700X is an 8-core, 16-thread chip that runs quiet and cool, while mostly matching the performance of the 8700K, is enough for me. I get the aforementioned benefits against a hotter, louder, more expensive chip that won't last as long in games next gen. It doesn't need to beat the 9900K for me to feel good about my purchasing decision.
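To illustrate the GPU-bottleneck point with a rough sketch (the CPU frame-rate ceilings below are hypothetical, not measured numbers): the observed frame rate is capped by whichever limit is lower, so a GPU-bound test makes very different CPUs look identical.

```python
# Minimal sketch: why a GPU-bound benchmark hides CPU differences.
# The CPU ceilings below are made-up numbers for illustration only.

def observed_fps(cpu_limit_fps: float, gpu_limit_fps: float) -> float:
    """The slower of the two limits dictates what the benchmark reports."""
    return min(cpu_limit_fps, gpu_limit_fps)

gpu_limit = 125            # e.g. high settings at 1080p on a given card
cpu_a, cpu_b = 160, 190    # two CPUs with very different headroom (hypothetical)

print(observed_fps(cpu_a, gpu_limit))  # 125
print(observed_fps(cpu_b, gpu_limit))  # 125 -> both CPUs look "equal" in a GPU-bound test
```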
 
Last edited:
I'm getting close to settling on this as my X570 board:

The X570 ranges from the four manufacturers currently offering boards have a lot of different distinguishing factors, but only two X570 boards in the $300-or-lower bracket offer 3x NVMe M.2 connectors on board: the ASRock X570 Taichi and the Gigabyte X570 Aorus Ultra.

The ASRock has LED POST codes, which makes Buildzoid hard as a rock, but the Gigabyte generally has better BIOS support. ASRock is known for being very slow with AGESA version updates, and early adopters absolutely want fast AGESA updates, especially since right now your AGESA version determines how much your CPU can boost above base clocks.

Neither ASUS nor MSI offers 3x NVMe connectors on a board in this price range. ASUS seems to give zero fucks at all; even their $700 board doesn't have 3x NVMe connectors.
 
He's purposefully GPU-limiting in a lot of scenarios, and the others must come down to testing error. Honestly not that hard to disprove.

Gamers Nexus tested Tomb Raider at the same resolution and medium settings. The 9900K won by a considerable margin (30 fps). This dude had everything equal at 125 fps on high settings, clearly illustrating a GPU bottleneck and, therefore, a completely pointless test.

These 'tests' really need to stop getting quoted/posted because they're the opposite of scientific and only do a disservice to potential buyers, who really should want the best, most accurate info on where the CPU stacks up in a purely CPU-bound scenario.

Again, I wanted the 3700X to dethrone the 9900K as much as anyone. I'm done with Intel after the 7600K and its short lifespan--screw 4 cores. That the 3700X is an 8-core, 16-thread chip that runs quiet and cool, while mostly matching the performance of the 8700K, is enough for me. I get the aforementioned benefits against a hotter, louder, more expensive chip that won't last as long in games next gen. It doesn't need to beat the 9900K for me to feel good about my purchasing decision.

I'm starting to think all these reviews running games at medium/low settings @1080p on a 2080 Ti of all cards are useless. Sure, you can see how much faster Intel is in that scenario, but who will play games at that res with a 2080 Ti? I just saw a review where the 3700X is really competitive with the 9900K, and it looks like the only thing the reviewer did was run the games on High/Ultra settings @1080p. So in theory there is already a GPU bottleneck just by enabling High settings. And let's say he ran the test with a GTX 1080/5700; there wouldn't even be a gap worth worrying about.



With the Ryzen 3700X, gaming at high res/settings will be a smoother experience because you have idle threads, compared to a 9700K, for instance, where games like Assassin's Creed can push all 8 cores to 100%. And with games becoming more multi-threaded by the day, those 8 cores are not going to be enough if you want to future-proof.

If you are not going to go all out on a 9900K, Ryzen is a no brainer.
 
Last edited:

MadYarpen

Member
So after some time what do you guys think?

I have a Ryzen 5 2600 at the moment. Stock, I don't think I would OC it myself. It sits in a B450 Tomahawk from MSI.

Next year I will surely swap the GPU, maybe for some Navi card (preferably with some ray tracing, so this would likely be "big" Navi) or an RTX card like the 2070 Super. At the moment I have a 1080p monitor, so ray tracing could be a realistic possibility.

In the first place it is for Cyberpunk, but generally I'd like to have a PC relevant to the next gen alongside the PS5, which I will buy as well.

So my question is, should I buy a new CPU with the GPU, or will the 2600 be enough for a start and be swapped later? I assume a 3700 or 3700X would be the way to go?
 

llien

Member
I'm starting to think all these reviews running games at medium/low settings @1080p on a 2080 Ti of all cards are useless. Sure, you can see how much faster Intel is in that scenario, but who will play games at that res with a 2080 Ti? I just saw a review where the 3700X is really competitive with the 9900K, and it looks like the only thing the reviewer did was run the games on High/Ultra settings @1080p. So in theory there is already a GPU bottleneck just by enabling High settings. And let's say he ran the test with a GTX 1080/5700; there wouldn't even be a gap worth worrying about.

The thought is "future proofing", e.g. if Intel is 10% faster at 720p on a 2080 Ti, it could, perhaps, be that much faster at 1440p high settings.

I have never seen it materialize, though, since game developers very rarely, if ever, dare to bottleneck on the CPU.
 
So after some time what do you guys think?

I have a Ryzen 5 2600 at the moment. Stock, I don't think I would OC it myself. It sits in a B450 Tomahawk from MSI.

Next year I will surely swap the GPU, maybe for some Navi card (preferably with some ray tracing, so this would likely be "big" Navi) or an RTX card like the 2070 Super. At the moment I have a 1080p monitor, so ray tracing could be a realistic possibility.

In the first place it is for Cyberpunk, but generally I'd like to have a PC relevant to the next gen alongside the PS5, which I will buy as well.

So my question is, should I buy a new CPU with the GPU, or will the 2600 be enough for a start and be swapped later? I assume a 3700 or 3700X would be the way to go?

I'd say wait until you upgrade your GPU, and then see if your new GPU is being held back by your 2600.
 

Shai-Tan

Banned
The thought is "future proofing", e.g. if Intel is 10% faster at 720p on a 2080 Ti, it could, perhaps, be that much faster at 1440p high settings.

I have never seen it materialize, though, since game developers very rarely, if ever, dare to bottleneck on the CPU.

It's pretty much only relevant for high refresh rates, to some extent because games are designed with console CPUs (and laptop CPUs) in mind, and those will always have lower-frequency cores. What happens instead is that lower-core-count CPUs suffer over time. For example, 4 cores hit a CPU wall in some more recent games, and it's possible 6 cores will hit a CPU wall in next-gen games if the 8-core CPUs in those consoles are sufficiently more powerful. Current-gen console CPUs have such low frequency that it doesn't matter. Anyhow, that's likely the only "future proofing" to worry about unless you have a specific interest in high refresh rates (including VR).

To be clear, this is always a function of price, as it is possible to get 10-15% higher lows/averages in some games with a more expensive CPU in some scenarios, but in almost all tests I do with options/resolutions turned up as one would like, it's 95-100% GPU-limited, i.e. excess money should be pumped into a better GPU if possible. Just going to 1440p, between my 8700K and my 2700X there's less than a 5% difference in almost all games except Assassin's Creed Odyssey/Origins, and that's the older 2700X, not the new 3600/3700X/3900X (which also support faster memory). Meanwhile, I actually play in 4K, where I could be using an old quad core if I wanted; it would have near performance parity outside of a few games.

It's possible next gen might be different, considering some games last year left quad cores behind (e.g. AC:O) and next gen will possibly utilize the CPU to support ray-tracing effects or whatever else. That's the only reason, if you're buying now, to overshoot just in case; e.g. I can imagine that if there are still 30fps games on console next gen, some might be more CPU-demanding at 60fps.
 

Irobot82

Member
HotHardware has an interview with Scott Herkelman where he shows how Steam surveys are skewed and aren't representative of the actual market share of their CPUs and GPUs. It's a pretty interesting watch.
 

Kazza

Member
A comparison of 2400G and 3400G without a dedicated graphics card:




All at medium settings, 1080p (average FPS 3400G/2400G):
GTA5 56/55
Project Cars 2 43/38
Doom 52/48
CS Go 123/108
Rocket League 82/74
Skyrim Special Edition 36/32


Forza Horizon 4 53/46 (1080p Low)
Assassin's Creed Odyssey 32/24 (720p low)
Fortnite 73/61 (1080p, low)
PUBG 47/36 (very low, 1080p)
Rage 2 42/34 (low, 720p)

A fairly decent, if not spectacular, upgrade.
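For a rough sense of the overall uplift, here is a quick sketch that only crunches the numbers listed above; the geometric mean works out to roughly +16%.

```python
# Per-game uplift and geometric-mean uplift for the 3400G vs 2400G figures above.
from math import prod

results = {  # (3400G fps, 2400G fps), taken from the list above
    "GTA V": (56, 55), "Project Cars 2": (43, 38), "Doom": (52, 48),
    "CS:GO": (123, 108), "Rocket League": (82, 74), "Skyrim SE": (36, 32),
    "Forza Horizon 4": (53, 46), "AC Odyssey": (32, 24), "Fortnite": (73, 61),
    "PUBG": (47, 36), "Rage 2": (42, 34),
}

ratios = [new / old for new, old in results.values()]
for (game, _), r in zip(results.items(), ratios):
    print(f"{game}: +{(r - 1) * 100:.0f}%")

geomean = prod(ratios) ** (1 / len(ratios))
print(f"Geometric mean uplift: +{(geomean - 1) * 100:.0f}%")  # roughly +16%
```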
 

kraspkibble

Permabanned.
I'm getting close to settling on this as my X570 board:

The X570 ranges from the four manufacturers currently offering boards have a lot of different distinguishing factors, but only two X570 boards in the $300-or-lower bracket offer 3x NVMe M.2 connectors on board: the ASRock X570 Taichi and the Gigabyte X570 Aorus Ultra.

The ASRock has LED POST codes, which makes Buildzoid hard as a rock, but the Gigabyte generally has better BIOS support. ASRock is known for being very slow with AGESA version updates, and early adopters absolutely want fast AGESA updates, especially since right now your AGESA version determines how much your CPU can boost above base clocks.

Neither ASUS nor MSI offers 3x NVMe connectors on a board in this price range. ASUS seems to give zero fucks at all; even their $700 board doesn't have 3x NVMe connectors.
I got the X570 Aorus Master. Absolute solid unit of a board. I love it! You'll love the Ultra. Go for it.

Also, if you hate the idea of the chipset fan (I do), then Gigabyte just put out a new BIOS that lets you customise it! It will even turn off if your system is cool enough. As far as I know, the only other boards that do that are MSI's. For me, having 3x NVMe slots was also a huge selling point. I only have 1 at the moment, but NVMe is the way to go since SATA is likely never going to be updated, so I plan on adding more drives.

I agree that Gigabyte has better BIOS support. I mean, it's not even been a week yet and Gigabyte put out that new BIOS because customers asked for it. Other makers can't even be bothered to update their lists of supported CPU/memory products yet, lol, never mind the BIOS. I've seen Gigabyte's community rep communicate across different sites, and that made me feel very confident going with them. Also, don't listen to anyone who says "huh, Gigabyte... great boards but SHIT BIOS!!!" The BIOS on these X570 boards is great.
 
Last edited:

JohnnyFootball

GerAlt-Right. Ciriously.
I guess then I'll change my order to a 600W PSU, especially because the CPU would be running under load a lot, and in the future I might have to do stuff with VR and an RX 570 maybe won't cut it...
Just curious: are you buying all 3 at once? If you are, it would be many, many, many times more beneficial to drop to something like a 3700X or even a 3600 and get a better video card than an RX 570.
 

llien

Member
A comparison of 2400G and 3400G without a dedicated graphics card:




All at medium settings, 1080p (average FPS 3400G/2400G):
GTA5 56/55
Project Cars 2 43/38
Doom 52/48
CS Go 123/108
Rocket League 82/74
Skyrim Special Edition 36/32


Forza Horizon 4 53/46 (1080p Low)
Assassin's Creed Odyssey 32/24 (720p low)
Fortnite 73/61 (1080p, low)
PUBG 47/36 (very low, 1080p)
Rage 2 42/34 (low, 720p)

A fairly decent, if not spectacular, upgrade.

I hate that AMD has 3xxx-series CPUs that are NOT 7nm.
 

llien

Member
Is a 450W PSU Gold+ enough to power a Ryzen 9 3900x + Radeon RX570 + 1 SSD Drive?

RX570 or 5700?
Regardless (as, curiously, 5700 consumes less than 570), if it is a quality PSU, it should be enough, unless you plan to do heavy OC-ing.
 
Last edited:

gundalf

Member
Just curious: are you buying all 3 at once? If you are, it would be many, many, many times more beneficial to drop to something like a 3700X or even a 3600 and get a better video card than an RX 570.

I bought the GPU already (I got it cheap for 100€) and the rest of the system is on back-order until the 3900X is back in stock (in 2 weeks according to the shop). Money is not an issue here; I mean, if I spend 200€ more and save 10 minutes a day compiling stuff, then it pays for itself very quickly. That is under the assumption this system is good enough for the next 3-4 years.

RX570 or 5700?
Regardless (as, curiously, 5700 consumes less than 570), if it is a quality PSU, it should be enough, unless you plan to do heavy OC-ing.

RX and no OC as I need the system to be rock stable :)
 
Last edited:

llien

Member
RX and no OC as I need the system to be rock stable :)
Ahem, both the 5700 and the 570 are RX, but I take it from the rest of your comment that it is the 570.

A 3900X with a 2080 Ti (which consumes about 100W more than a 570) sits at around 385W total system power consumption in the TPU test, so your system would be at around 250-270W, quite comfortably within 450W.

The CPU alone cannot push you further than that either.

Also note that most PSUs hit peak efficiency at around 60-80% of their max power rating, so overkill on max power means somewhat higher power consumption, as PSU efficiency drops.
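To make the headroom arithmetic explicit, a quick sketch using the figures quoted above (the 285W case is my own pessimistic estimate from subtracting the ~100W GPU delta from the 385W TPU number):

```python
# Back-of-the-envelope PSU headroom check for a 3900X + RX 570 system.
psu_rating_w = 450

# 250-270 W is the estimate quoted above; 285 W = 385 W (TPU test) minus ~100 W GPU delta.
for est_draw_w in (250, 270, 285):
    frac = est_draw_w / psu_rating_w
    print(f"{est_draw_w} W draw -> {frac:.0%} of a {psu_rating_w} W PSU")

# All of these leave comfortable headroom and sit around the 60-80% load band
# where most PSUs hit their peak efficiency.
```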
 

Shai-Tan

Banned
I guess then I'll change my order to a 600W PSU, especially because the CPU would be running under load a lot, and in the future I might have to do stuff with VR and an RX 570 maybe won't cut it...

There are various power supply calculators you could try. I would tend to go 600+ just because it might get you more reuse in the future; I've reused a lot of my PSUs between builds over the years. If you get a 400, it's basically obsolete the moment you get a more powerful video card, want to overclock, load the system with a bunch of disk drives, etc.
 

Kenpachii

Member
The 3600 gets close to the 3700X in a number of benchmarks, and the latter beats a properly patched 9900K in a number of games.

Never going to happen. That guy has a crusade against Intel and his benches showcase it.
 
Last edited:

thelastword

Banned
The 3600 gets close to the 3700X in a number of benchmarks, and the latter beats a properly patched 9900K in a number of games.
How many of these reviews are based on 5.2 GHz delidded 9900Ks with full-blown Spectre (aka, unpatched)?.....Would be an interesting tidbit to know.....
 

thelastword

Banned
That and the 3000 series APUs still use Vega.
Didn't the same thing happen with prior APUs? When the 12nm 2600 series came out, the APUs were still on 14nm....I think there's reasoning there.....They can't put APUs out at the same time they land a new node shrink, that takes time, and besides, they don't want people just buying a tonne of APUs and not buying their GPUs....
 

Kenpachii

Member


PCIe 4.0 has some godly read and write rates.

Currently got 3500/3200 and damn it's fast. AC Origins has like 1-second load times or something. Probably the best feature of 4.0.

Finally, BDO horses no longer hang in the air when you move too fast through the environment.
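For context on why load times drop so sharply, a rough lower-bound sketch from the quoted sequential read speed; the asset sizes are hypothetical, and real loads also spend time on decompression and CPU work.

```python
# Best-case time to stream assets off a drive reading at 3500 MB/s (quoted above).
seq_read_mb_s = 3500
asset_sizes_gb = [1, 3, 8]   # hypothetical amounts of data read during a load

for gb in asset_sizes_gb:
    seconds = gb * 1024 / seq_read_mb_s
    print(f"{gb} GB of assets: ~{seconds:.1f} s of pure sequential reads")
```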
 
Last edited:
9900K vs. 3900X for gaming + streaming:



12c/24t is better than 8c/16t for streaming; in unrelated news, water is wet.
But when you add the GPU and let Shadowplay do it, well, that's vastly superior to both CPUs trying to do it.
Moral of the story: use your GPU's video encoder for streaming! But if you don't for some reason.....the 3900X wins.
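As a minimal sketch of that trade-off, here is how the two paths look when driven through ffmpeg from Python; the file names and bitrate are placeholders, and it assumes an ffmpeg build with NVENC support. libx264 runs on the CPU cores, while h264_nvenc runs on the GPU's dedicated encoder block, which is why it barely touches game performance.

```python
# Sketch: CPU (libx264) vs GPU (h264_nvenc) encoding of the same capture.
# Assumes ffmpeg with NVENC support is installed; paths and bitrate are placeholders.
import subprocess

SOURCE = "gameplay_capture.mkv"  # hypothetical capture file

def encode(codec: str, output: str) -> None:
    subprocess.run([
        "ffmpeg", "-y", "-i", SOURCE,
        "-c:v", codec,        # "libx264" = CPU encode, "h264_nvenc" = GPU encoder block
        "-b:v", "6000k",      # typical streaming bitrate
        "-c:a", "copy",
        output,
    ], check=True)

encode("libx264", "out_cpu.mp4")     # loads the CPU cores alongside the game
encode("h264_nvenc", "out_gpu.mp4")  # offloaded to NVENC, minimal CPU cost
```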
 
9900K vs. 3900X for gaming + streaming:



12c/24t is better than 8c/16t for streaming; in unrelated news, water is wet.
But when you add the GPU and let Shadowplay do it, well, that's vastly superior to both CPUs trying to do it.
Moral of the story: use your GPU's video encoder for streaming! But if you don't for some reason.....the 3900X wins.


I know little of video encoding. He's using a 2080 Ti for these tests, isn't he? How's GPU video encoding on cheaper cards like the 580, 1070, 1070 Ti or Vega 56, for example?
 

thelastword

Banned


PCIe 4.0 has some godly read and write rates.

Currently got 3500/3200 and damn it's fast. AC Origins has like 1-second load times or something. Probably the best feature of 4.0.
The tech media said PCIe 4 had no benefits for gaming. WRONG.....Apart from load times, streaming, etc., imagine what's possible with HBCC now, or a 2.0 version of that, when devs start using more advanced physics simulations in their games, and I think it will be very important for when ray tracing becomes viable.....However, in the here and now, PCIe 4 is proving to be the driver of extreme performance in these new AMD products, that and Game Cache...


Everyone should take a look at this video to understand why....They will have detailed videos on every aspect they discussed: streaming, encoding.....I think EposVox even said he created a new benchmark to test game streaming; he was not satisfied with what existed before, because they were inaccurate.....Great watch below....

 

kraspkibble

Permabanned.
Really enjoying my 3700X so far. Haven't really stressed it yet or messed about with overclocking though... probably do benchmarks and stuff tomorrow. I've been up since 3AM today (12 hours so far). New CPU, new motherboard, new PSU, new hard drives, clean installation of Windows. Ran into a few issues but got them sorted now.
 
I know little of video encoding. He's using a 2080 Ti for these tests, isn't he? How's GPU video encoding on cheaper cards like the 580, 1070, 1070 Ti or Vega 56, for example?
The video encoder block is always the same (when present) within a generation of GPUs. So all Pascals have the same encoder block, all Turings, etc.

I know nothing about the AMD GPUs, but on the Nvidia side you've been able to use Shadowplay for low-overhead streaming and recording via the NVENC hardware encoder for many years now, going back to the Kepler generation. The quality of the hardware encoding used to be much worse and has steadily improved every generation, to the point where you can now use Shadowplay for both streaming and recording with very good quality, comparable to x264 run on the CPU.
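If you want to see which hardware encoders your own card, driver and ffmpeg build expose (NVENC on Nvidia, AMF on AMD, QSV on Intel), here is a quick sketch, assuming ffmpeg is on the PATH:

```python
# List the hardware video encoders available in the local ffmpeg build.
import subprocess

out = subprocess.run(["ffmpeg", "-hide_banner", "-encoders"],
                     capture_output=True, text=True, check=True).stdout
hw = [line.strip() for line in out.splitlines()
      if any(tag in line for tag in ("nvenc", "amf", "qsv"))]
print("\n".join(hw) or "No hardware encoders found in this ffmpeg build")
```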
 