
Alder Lake Gaming Efficiency Tested. The Most Efficient Gaming Architecture.

Leonidas

Member
Who games at 720p? If 12th gen is DDR5, why in the world would you pair it with DDR4-3200?

But I agree the 12600K is the CPU to get. Intel scored big with it.

Edit: that's quite an in-depth review, thanks for the link!
Check Igor for higher resolutions. His tests show that even at higher resolutions Alder Lake is still the most efficient CPU gaming architecture, by margins similar to low res. Alder Lake actually seems to benefit slightly as you go up in resolution, since the CPU is stressed even less.

Lower resolutions are the worst case for CPU power consumption, and they reduce GPU bottlenecks.
 
Last edited:

spyshagg

Should not be allowed to breed
Okay. What timeframe are you talking about? In the far off future? Or within a timeframe that a gamer buying a high-end CPU gives a crap about?

“the other CPU/GPU might be better right now, but my team’s CPU/GPU will come out ahead over time” has always been the last line of defense for fanboys and is almost always fantasy bullshit.

I’m willing to bet on it. For the next 3 years, we will not reach a point where average Zen 3 gaming perf/watt for games released that year is better than Alder Lake’s at 1440p. (Or heck, just compare total system power consumption if that makes it easier.)

I’ll change my avatar to a doughy pink haired feminist and insist everyone call me “they/them” if I’m wrong.
Whose team? What? Fanboys, who? You live in a weird world, man.

Outside of little meaningless internet wars about chipmaker brands, there are such things as mathematics, logic, inference, etc., which are used to predict future behavior. Alder Lake's power consumption in gaming is about half of what the chip uses when fully loaded. Games tend to use more CPU as time goes on; it's a statistical truth. You can infer that perf/watt will get worse, in gaming! In other software it's already terrible.
 
Last edited:

PhoenixTank

Member
Famous YouTuber JayzTwoCents shits on Leonidas


Totally slams Intel for their power draw.
You've entirely missed the point - nothing there dumps on anything Leo has said in this thread regarding gaming.

Entirely fair to post this thread & set the record straight on Alder Lake's power usage during non-production workloads like gaming, especially given the posts we've had. They've done well on the production side to counter AMD, albeit at high wattage; they were also reasonably far behind before, and are doing it at a significantly lower price. I'd like to see some production comparisons with PBO enabled for Zen 3 against Alder Lake under similar cooling constraints.

That said: I'm a little underwhelmed by the performance improvements in gaming versus Zen 3.
A year later, entirely new node for desktop, 4 microarchitectures down the line from Skylake and going on Igor's numbers a 12900KF is ~2-4% faster than a 5900X @ 1080p?
Far Cry 6 sits as an outlier at 16-17% faster. Notable, and more in line with some of the IPC increases, but not exactly a well-optimized game. It gives me pause - need to see more benches in more games.
Room for refinement with the 13th gen I imagine, but this time around most of the big talk from Pat Gelsinger, in retrospect, seems to have been aimed at HEDT & server space, IMHO.

The 12600K is a lot more interesting for its place in the market. Looks like a right powerful little beauty and should force price adjustments on AMD's side for the midrange too.

I think they're going to do really well with this on the laptop side, eating away at the inroads AMD have made there.

Whose team? What? Fanboys, who? You live in a weird world, man.

Outside of little meaningless internet wars about chipmaker brands, there are such things as mathematics, logic, inference, etc., which are used to predict future behavior. Alder Lake's power consumption in gaming is about half of what the chip uses when fully loaded. Games tend to use more CPU as time goes on; it's a statistical truth. You can infer that perf/watt will get worse, in gaming! In other software it's already terrible.
They've made a few posts now basically attacking a generalisation of an AMD fan, as if they're all some kind of amorphous blob or hivemind, rather than addressing the individual's points. 🤷‍♂️ Internet forums, eh?

As for scaling, you're right and you're wrong. AFAIK many of these power-hungry production workloads are a lot more AVX/AVX2-heavy than games are on a per-core basis. Games themselves would need to utilise these instruction sets much more heavily, while also scaling out across cores, for that worst case to hold true.
 

Leonidas

Member
Whose team? What? Fanboys, who? You live in a weird world, man.

Outside of little meaningless internet wars about chipmaker brands, there are such things as mathematics, logic, inference, etc., which are used to predict future behavior. Alder Lake's power consumption in gaming is about half of what the chip uses when fully loaded. Games tend to use more CPU as time goes on; it's a statistical truth. You can infer that perf/watt will get worse, in gaming! In other software it's already terrible.
Take his bet then. If no game pushes the 12900K to its 241 W limit any time soon, your argument is moot.
 
Last edited:

spyshagg

Should not be allowed to breed
Take his bet then. If no game pushes the 12900K to its 241 W limit any time soon, your argument is moot.
Everything is a game to some people.

It doesn't need to reach 241 W to prove the point. It only needs to pass Zen 3, and that's only a little.
 

spyshagg

Should not be allowed to breed
Post again when you have actual data to back up your claims.

It already happens in the lower end models.



 

SZips

Member
Looks good. I bought a Ryzen 9 5900X a couple of months ago now and like it. It was a nice upgrade from my i7-6700K. If Intel failed to surpass a CPU lineup that came out literally a year ago, then something would be seriously amiss with that company.

I fully expect Zen 4 to snatch the crown back in 2022. Hell, I wouldn't be surprised if the more immediate release of the 3D Zen 3 chips bridged the gap.
 

spyshagg

Should not be allowed to breed
The only way to test how much each core really consumes is to disable most of the cores in the BIOS and only enable the cores the game you're testing actually uses.

Igor's charts only demonstrate that, when all cores are enabled (as they should be), half of the 12900K's cores are "efficient" cores while all of Ryzen's cores are performance cores. So the perf-per-watt numbers become distorted.
 
Last edited:

Leonidas

Member
It already happens in the lower end models.
There are always going to be some games where AMD does better; igor's results I linked to in the OP showed that...

Cyberpunk is a game where AMD does better than usual in power consumption.

Here's a chart from CapFrameX showing this
(it's too bad the chart below doesn't include the 5600X/5800X)

In some games (including Cyberpunk), power consumption gets blown up by DDR5 (here the 12900K gained 28 watts).

HUB paired it with even faster DDR5, which might have pushed that number even higher...
 
Last edited:

spyshagg

Should not be allowed to breed
There are always going to be some games where AMD does better; even igor's results show that...

Cyberpunk is a game where AMD does better than usual in power consumption.

Here's a chart from CapFrameX showing this
(it's too bad they didn't test the 5600X/5800X)

In some games (including Cyberpunk), power consumption gets blown up by DDR5 (here the 12900K gained 28 watts).

HUB paired it with even faster DDR5, which might have pushed that number even higher...
It wasn't only some games. It's the average across all the testing. Check the graphs again.

If you don't understand the architecture and why perf per watt per core shows those results, then yes, you would assume Alder Lake is more advanced per core, yet it isn't really. The benefit comes from half its cores being efficiency cores.
 

Leonidas

Member
It wasn't only some games. It's the average across all the testing. Check the graphs again.

If you don't understand the architecture and why perf per watt per core shows those results, then yes, you would assume Alder Lake is more advanced per core, yet it isn't really. The benefit comes from half its cores being efficiency cores.
If that is the case, then that is further proof it is a better architecture for gaming. High-performance cores shouldn't be wasted when an efficient core can get the job done. Bodes well for hybrid architectures on PC going forward.
 
Last edited:

spyshagg

Should not be allowed to breed
If that is the case, then that is further proof it is a better architecture for gaming. High-performance cores shouldn't be wasted when an efficient core can get the job done. Bodes well for 13th Gen and beyond.
It proves it's better for laptops and products where battery life matters.

On the desktop it's not really better. As games increase their CPU load, the wattage will start to spill over.

Realize that if they could put 16 performance cores in that package, they absolutely would. The efficiency cores are there because otherwise you would not see 240 W, you would see 300-plus. The Ryzen 5950X package is around 150 W with all performance cores. It's a monumental difference.
 

Leonidas

Member
It proves it's better for laptops and products where battery life matters.

On the desktop it's not really better. As games increase their CPU load, the wattage will start to spill over.
More facts, less speculation, please.

Today it's the most efficient gaming architecture. That's with these 150/190/241 W rated parts.

Non-K SKUs are launching soon, which will be rated at much lower wattage. There's even a 6+0 variant (12400) that will offer an interesting comparison to the 5600X. I'll wait for the facts to come out...
 

spyshagg

Should not be allowed to breed
More facts, less speculation, please.

Today it's the most efficient gaming architecture. That's with these 150/190/241 W rated parts.

Non-K SKUs are launching soon, which will be rated at much lower wattage. There's even a 6+0 variant (12400) that will offer an interesting comparison to the 5600X. I'll wait for the facts to come out...

Who put you on the throne of rightfulness? Keep your demands to yourself.

I stated facts and presented graphs that you would find yourself if you looked, even if you don't understand what you see.
 

Leonidas

Member
I stated facts and presented graphs that you would find yourself if you looked, even if you don't understand what you see.
You presented a power consumption graph for Cyberpunk, which is one game that not only favors AMD in power consumption, but the 12600K in your graph was also running high-speed DDR5, which blows up power consumption in many instances. I'd rather HUB tested with DDR4 too, as that is what I use, and CapFrameX shows a 28-watt increase on DDR5 compared to DDR4 on the 12600K in Cyberpunk...
 
Last edited:

spyshagg

Should not be allowed to breed
You presented a power consumption graph for Cyberpunk, which is one game that not only favors AMD in power consumption, but is also one of a few known games which blow up DDR5 CPU power consumption. I'd rather HUB tested with DDR4 too, as that is what I use, and CapFrameX shows a 28-watt reduction in power in Cyberpunk going to DDR4.
Pick any game. Variance will not defeat the point being made here.

The Alder Lake P-core is not much more efficient than Zen 3 cores. The perf/watt savings, once again, come from the fact that half its cores are E-cores.

The 5600X vs. the 12600K clearly demonstrates that the fewer E-cores Alder Lake has, the less savings it gets, making them almost equal in every graph. Igor's graphs show this, and you have to understand why the 12900K shows much lower watts than the 5950X.
 

Leonidas

Member
Pick any game. Variance will not defeat the point being made here.

The Alder Lake P-core is not much more efficient than Zen 3 cores. The perf/watt savings, once again, come from the fact that half its cores are E-cores.

The 5600X vs. the 12600K clearly demonstrates that the fewer E-cores Alder Lake has, the less savings it gets, making them almost equal in every graph. Igor's graphs show this, and you have to understand why the 12900K shows much lower watts than the 5950X.
CapFrameX shows the 12600K, with only 4 E-cores, being the most efficient CPU.

There's too much variance. Your graph shows high-speed DDR5, which probably blew up the 12600K's consumption numbers.

igor's numbers use DDR5 too, which seemingly hurt Alder Lake's results, though it still came out on top.

CapFrameX is the only one with a true apples-to-apples comparison, with DDR4 across the board.

I'll wait for more such tests. I'll leave the speculation to you.
 

spyshagg

Should not be allowed to breed
I don't need your approval. The results are there to be seen. More tests will appear next week that illustrate what is obvious.
 

Leonidas

Member
The results are there to be seen.

Agreed. The results are there to be seen with CapFrameX's like-for-like DDR4 comparison (now included in the OP).



There is a 59% difference in gaming efficiency between the 12600K and 5900X in these results. Some Zen 3 CPUs fare a bit better, but they'll never make up that 59% gap, unless you do an apples-to-oranges comparison.
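For readers unused to the metric: "gaming efficiency" in these charts is just frames per second divided by CPU power, and a percentage gap like the one above is the ratio of the two fps-per-watt numbers. A minimal sketch, using made-up fps and watt figures (NOT the actual CapFrameX measurements):

```python
# Sketch of how a gaming-efficiency gap falls out of fps and watt numbers.
# All values below are invented for illustration, not measured results.

def efficiency(fps: float, watts: float) -> float:
    """Gaming efficiency: frames per second per watt of CPU package power."""
    return fps / watts

eff_cpu_a = efficiency(fps=150.0, watts=60.0)   # hypothetical 12600K-style figures
eff_cpu_b = efficiency(fps=145.0, watts=92.0)   # hypothetical 5900X-style figures

# Percentage advantage of CPU A over CPU B in fps per watt.
gap_percent = (eff_cpu_a / eff_cpu_b - 1.0) * 100.0
print(f"{eff_cpu_a:.2f} fps/W vs {eff_cpu_b:.2f} fps/W -> {gap_percent:.0f}% gap")
```

With these illustrative inputs the gap happens to work out to roughly 59%; the real charts use measured per-game package power rather than invented numbers.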
 

nemiroff

Gold Member
This is super impressive. All we’ve heard is “OMG MORE THAN DOUBLE the power consumption of Zen 3!!!” But for gaming scenarios it actually consumes less than Zen 3 while also delivering better performance.

No it doesn't, AFAIK. I saw a video from Jay demonstrating the power draw yesterday. It is consistently 100 W more than the 5900X when gaming. It is, however, lower for normal desktop usage because of the E-cores.
 
Last edited:

Leonidas

Member
No it doesn't, AFAIK. I saw a video from Jay demonstrating the power draw yesterday. It is consistently 100 W more than the 5900X when gaming. It is, however, lower for normal desktop usage because of the E-cores.
Link timestamped vid plz
 

nemiroff

Gold Member
Link timestamped vid plz

Wait, I suddenly realized he didn't test gaming but pushed the CPUs to 100%. That is relevant for CPU-intensive games, but not every game runs at full CPU load, so that will change some graphs... My bad.
 
Last edited:

spyshagg

Should not be allowed to breed
Agreed. The results are there to be seen with CapFrameX's like-for-like DDR4 comparison (now included in the OP).



There is a 59% difference in gaming efficiency between the 12600K and 5900X in these results. Some Zen 3 CPUs fare a bit better, but they'll never make up that 59% gap, unless you do an apples-to-oranges comparison.

Do you even understand why some Zen 3 CPUs fare better?


No Zen 3 CPU has very small, slow E-cores to increase the core count while adding little to nothing to the overall wattage. But reviews are giving those E-cores the same weight as full P-cores when calculating perf per watt per core.


The logic you are missing is that a hypothetical Alder Lake CPU with 6 P-cores + 10 E-cores would absolutely destroy a 16 P-core + 0 E-core CPU in perf/watt/core in GAMING, while being categorically WORSE overall. The multithreaded benchmarks expose this FACT.
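The hybrid-vs-all-P-core argument above can be sketched in a few lines. This is a toy model, not measured data: the per-core wattage, per-core throughput, thread count, and fps values are all invented, and it assumes both chips hit the same fps in a game that cannot scale past 8 threads.

```python
# Toy model of the hybrid-vs-all-P-core argument. Every number here is an
# assumption chosen for illustration, not a measurement of any real CPU.

P_CORE_WATTS, E_CORE_WATTS = 15.0, 4.0   # assumed per-core power under load
P_CORE_PERF, E_CORE_PERF = 1.0, 0.45     # assumed relative per-core throughput

def gaming_perf_per_watt(p_cores: int, e_cores: int,
                         game_threads: int = 8, fps: float = 140.0) -> float:
    # A lightly threaded game: busy threads land on P-cores first, the
    # overflow lands on E-cores, and idle cores are counted as ~0 W.
    # Both chips are assumed to hit the same fps (the game can't use more cores).
    busy_p = min(game_threads, p_cores)
    busy_e = max(0, game_threads - p_cores)
    watts = busy_p * P_CORE_WATTS + busy_e * E_CORE_WATTS
    return fps / watts

def full_load_throughput(p_cores: int, e_cores: int) -> float:
    # Fully loaded multithreaded run: every core contributes.
    return p_cores * P_CORE_PERF + e_cores * E_CORE_PERF

hybrid = (6, 10)   # hypothetical 6 P-core + 10 E-core part
all_p = (16, 0)    # hypothetical 16 P-core + 0 E-core part

print(gaming_perf_per_watt(*hybrid), gaming_perf_per_watt(*all_p))
print(full_load_throughput(*hybrid), full_load_throughput(*all_p))
```

Under these assumptions the hybrid wins gaming perf/watt (cheap idle E-cores keep the wattage down) while losing clearly in fully loaded throughput, which is the shape of the argument being made.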
 

Soosa

Member

The efficiency graphs are at the bottom of the following pages (Average CPU Watts by Frames Per Second)

720p https://www.igorslab.de/en/intel-co...ng-in-really-fast-and-really-frugal-part-1/3/
1080p https://www.igorslab.de/en/intel-co...ng-in-really-fast-and-really-frugal-part-1/5/
1440p https://www.igorslab.de/en/intel-co...ng-in-really-fast-and-really-frugal-part-1/7/


Good to see a place testing gaming power consumption.

Alder Lake has lower watts per frame than Zen3 in gaming. An Alder Lake CPU is topping these charts for gaming efficiency most of the time. The only Zen3 CPUs that come close to Alder Lake in terms of gaming efficiency are the 5600x & 5800x, while the high core count Zen3 CPUs have gaming efficiency closer to Rocket Lake...

Looks like Alder Lake is not only the fastest gaming CPU architecture, but also the most efficient CPU for gaming.

Found another source.

From the CapFrameX review (7 of the 10 games differ from igor's, and a different memory setup too)


Seems that efficiency increases further when you use DDR4.
So what?

If I want the car with the best acceleration, will I buy the car with the lowest fuel consumption/acceleration ratio?

This just sounds like some kind of smoke-and-mirrors campaign.

If Intel can't win on pure gaming performance -> let's focus on fps/watt to make people think this CPU family is now the best one for gaming.

I mean, sure, if you want to save some pennies on the electricity bill, then watts/fps is one thing to look at.

But if you want the best/fastest CPU for gaming, then you just look at the fps, or the fps/money ratio.

In some reviews AMD still had the best price/fps CPUs, so I'm not 100% sure which one is now the king of that category.

But it just sounds weird that this one fps/watt test now makes people rush to buy Intel?

I have read CPU reviews for decades and this is the first time I remember seeing people use fps per watt as a really important criterion for choosing their gaming CPU. Just weird. It has basically always been fps per dollar before.

Now if it is the best fps/watt + the best fps/price + more than enough fps combined in a single CPU, then it is a good deal. But watts/fps alone tells you nothing.

Let's say there is a CPU that outputs 10 fps at 0.000000000000000000000001 W, but can't output more than 10 fps. Will people rush to buy this one because it has an epic fps/watt ratio?
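The thought experiment above is easy to put into code: ranked purely by fps/watt, the capped 10 fps chip "wins", which is why the ratio only means something once a minimum performance bar is applied. A sketch, where the 10 fps values come from the post itself and the comparison chip's numbers are invented:

```python
# Soosa's thought experiment: fps/watt alone can crown a CPU that is far
# too slow to actually game on. The second entry is an invented comparison.

cpus = {
    "10 fps wonder":   {"max_fps": 10.0,  "watts": 1e-24},
    "fast but hungry": {"max_fps": 200.0, "watts": 120.0},
}

def best_by_fps_per_watt(candidates):
    # Rank purely by the efficiency ratio, ignoring absolute performance.
    return max(candidates, key=lambda n: candidates[n]["max_fps"] / candidates[n]["watts"])

def best_playable(candidates, min_fps=60.0):
    # Apply a performance floor first, then rank the survivors by efficiency.
    playable = {n: c for n, c in candidates.items() if c["max_fps"] >= min_fps}
    return best_by_fps_per_watt(playable) if playable else None

print(best_by_fps_per_watt(cpus))  # the 10 fps chip "wins" on raw fps/watt
print(best_playable(cpus))         # the floor filters it out
```

The `min_fps` floor is the missing ingredient: fps/watt is only a useful tiebreaker among CPUs that are already fast enough.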
 

Kenpachii

Gold Member
It's funny how some people get triggered by Leonidas' posts, yet he's completely right.

So what?

If I want the car with the best acceleration, will I buy the car with the lowest fuel consumption/acceleration ratio?

This just sounds like some kind of smoke-and-mirrors campaign.

If Intel can't win on pure gaming performance -> let's focus on fps/watt to make people think this CPU family is now the best one for gaming.

I mean, sure, if you want to save some pennies on the electricity bill, then watts/fps is one thing to look at.

But if you want the best/fastest CPU for gaming, then you just look at the fps, or the fps/money ratio.

In some reviews AMD still had the best price/fps CPUs, so I'm not 100% sure which one is now the king of that category.

But it just sounds weird that this one fps/watt test now makes people rush to buy Intel?

I have read CPU reviews for decades and this is the first time I remember seeing people use fps per watt as a really important criterion for choosing their gaming CPU. Just weird. It has basically always been fps per dollar before.

Now if it is the best fps/watt + the best fps/price + more than enough fps combined in a single CPU, then it is a good deal. But watts/fps alone tells you nothing.

Let's say there is a CPU that outputs 10 fps at 0.000000000000000000000001 W, but can't output more than 10 fps. Will people rush to buy this one because it has an epic fps/watt ratio?

Multiple things are wrong with your post.

1) Thinking that the Intel 12000 series is slower for gaming.
2) The money ratio isn't important for people who want the best; they spend no matter what.
3) Leonidas is just comparing perf per watt in fps terms, which is what you would do when comparing, because one chip is faster, otherwise there is no comparison. It has nothing to do with whether you or anybody else is interested in buying the chip on that argument; it's just to prove his point.
4) People use all kinds of metrics to make statements: some are true, some are misleading, some are right, some are idiotic.

Take me as an example: in most games I play I only see 20% CPU utilization at best, yet I want a faster CPU. Why? Single-core performance, the most important metric. After that, gaming wattage, because I don't want to sit next to the air conditioning all summer long, so only a certain amount of wattage is acceptable to me. And the CPU has to come with a certain number of cores for compatibility.

  • So any benchmark that includes GPU-limited scenarios to skew the end results is instantly dismissed.
  • Any benchmark that doesn't include RT performance in its CPU benchmarks, in games that push the CPU to its limits, is pointless to me.
  • Any benchmark of a game that does push single-core performance but gets 2x the results my CPU does, when my CPU isn't 2x slower than a 12900K, is uninteresting to me (because they test areas that are useless). Example: Cyberpunk without RT at 1080p ultra settings = GPU bottleneck; Cyberpunk with RT at 1080p ultra settings on a 3090 = CPU bottleneck.
  • After that, wattage in gaming needs to be showcased in modern games, not synthetic benchmarks, not applications that push the CPU to 100% like Time Spy. Those are useless to me.

The problem many people have here is that they watch benchmarks that contain all of the above, and it skews the results massively, to the point you get statements like "oh well, it's only 3% faster" because some benchmark dude measured a bunch of useless metrics and produced even more skewed data, and that's it, guys. It's tiring.

I can tell you this: when MCM cards and the next NVIDIA GPUs arrive, CPU bottlenecks, which are already a thing, will be mighty strong even at 4K, and especially under RT at 1440p, if those cards double if not triple the performance (the last part is unlikely).
 
Last edited:

spyshagg

Should not be allowed to breed
It's funny how some people get triggered by Leonidas' posts, yet he's completely right.



Multiple things are wrong with your post.

1) Thinking that the Intel 12000 series is slower for gaming.
2) The money ratio isn't important for people who want the best; they spend no matter what.
3) Leonidas is just comparing perf per watt in fps terms, which is what you would do when comparing, because one chip is faster, otherwise there is no comparison. It has nothing to do with whether you or anybody else is interested in buying the chip on that argument; it's just to prove his point.
4) People use all kinds of metrics to make statements: some are true, some are misleading, some are right, some are idiotic.

I only care about single-core performance and gaming wattage, nothing else, for example.

You go for the fastest, period. That we can both agree on.


But that is not the discussion we are having here. The discussion, as per the OP article, is that Alder Lake "destroys" Zen 3 in perf per watt per core in gaming.


The caveat, and the elephant in the room, is that the article is WRONG.


1) They are dividing performance by all the cores available on Alder Lake, which includes the E-cores, which are tiny, slow, and consume little.

2) Even cheating the calculations by treating E-cores the same as P-cores, wattage goes berserk on Alder Lake when loads increase, and increasing CPU load has been the undeniable trend in games for the last 8 years.
 
Last edited:

Kenpachii

Gold Member
You go for the fastest, period. That we can both agree on.


But that is not the discussion we are having here. The discussion, as per the OP article, is that Alder Lake "destroys" Zen 3 in perf per watt per core in gaming.


The caveat, and the elephant in the room, is that the article is WRONG.


1) They are dividing performance by all the cores available on Alder Lake, which includes the E-cores, which are tiny, slow, and consume little.

2) Even cheating the calculations by treating E-cores the same as P-cores, wattage goes berserk on Alder Lake when loads increase, and increasing CPU load has been the undeniable trend in games for the last 8 years.

Where is the article? I can't find it.
 

Kenpachii

Gold Member
First post of this thread.

I read his benchmark; frankly it's the best Cyberpunk benchmark I've seen from anybody so far. Still, improvements can be made there; The Ascent was a bit disappointing. He should have benched with an NVIDIA GPU at max settings at 1080p or lower there anyway.

Does the chart not show 3 versions of the 12900K?

1) DDR4-3700, 241/241 W
2) DDR4-3200 (P-cores)
3) DDR4-3200, 241/241 W

Not gonna lie, I find that chart hard to read. But does it not say he tested with P-cores only, which basically means the other cores are not used or taken into account?

No clue if you can even disable the E-cores; I assume you can.
 
Last edited:
So what?

If I want the car with best acceleration, will I buy the car with the lowest fuel consumption/acceleration ratio?

This just sounds like some kind of smoke&mirrors campaing.

If IIntel cant win on pure gaming performance -> lets focus on fps/watt to make think that this CPU family is now the best one for gaming.

I mean, sure, if you want to save some pennies on electricity bill, then watt/fps is one thing to look at.

But if you want best/fastest cpu for gaming, then you just watch the fps or fps/money ratio.

On some reviews AMD still had best price/fps cpus, so I'm not 100 % sure which one is now the king on that category.

But it just sounds weird that this one FPS/WATT test now makes people rush to buy Intel?

I have read cpu reviews for decades and this is the first time I remember seeing people using fps per watt as really important feature to choose their gaming cpu. Just weird. It always have been fps per dollar before basically.

Now if it is best fps/watt + best fps/price + more than enough fps combined on single cpu, then it is a good deal. But Watt/fps tells nothing.

Lets say there is cpu that outputs 10fps with 0.000000000000000000000001W, but cant output more than 10 fps. So will people rush to buy this one, because it have the epic fps/watt ratio?
OK, first of all: the reason I care about CPU power consumption isn’t the few pennies’ worth of electricity, it’s the increased heat and the noise/cooling capacity required to keep the temps under control.

Most of us were expecting that Alder Lake would deliver the best gaming performance but also that it would consume dramatically more power than Zen 3 (like, on par with Rocket Lake or worse). Personally I wouldn’t have been willing to make that trade off.

THAT is why these measurements are interesting. Nobody is claiming that fps/watt is the One True Benchmark that all gamers should base their decisions on. But these numbers are very pleasantly surprising, and IMO this is the big story of the launch as far as gaming is concerned.

So yeah, believe it or not, people might care about more than one thing simultaneously. Perf/watt, perf/$, total cost, total power consumption, raw performance, those are all things that would factor into any reasonable gamer’s CPU choice.
 
Last edited:

marquimvfs

Member
No clue if you can even disable the E-cores; I assume you can.
Yes, you can. There's an option in the BIOS (Legacy Game Compatibility Mode) to do just that. It's being described as the Scroll Lock workaround: you activate the option in the BIOS, and the system keeps the E-cores disabled while Scroll Lock is active.
 
Last edited:

Kenpachii

Gold Member
Here are the graphs from computerbase.de

As usual, it depends on the game and what kind of load it puts on the CPU. Sometimes AMD is more efficient and sometimes Intel.


Interesting from this chart: the moment RT kicks in, wattage goes up, because RT eats CPU cycles. I really wish we had people who test these chips and know what they are testing, because at the moment it just feels like they run through a random script of areas that don't really push those cores, even when they pick the right game.

I wish they tested with:
3090 + 12900K
The Ascent + max RT, 1080p ultra settings
Anno 1800 (fully built map), 1080p max settings
They Are Billions, 1080p max settings (fully built map)
Riftbreaker + RT, 1080p ultra settings (fully built map)
Cyberpunk market area, 1080p max settings + max RT
Control (hallway fight), 1080p max settings + max RT
Two Point Hospital (max settings, 1080p)

Just a few examples. All of them bottleneck my 3080 on a 9900K at 5.0 GHz with RT on, big time; this would really showcase how fast those cores are in reality and what to expect.

Sadly, no dice.
 
Last edited:

mrmeh

Member
Looks like a great chip from Intel (finally). The 720p results are nonsense though: if you spend that much on a CPU, you're going to have a high-end GPU and will be gaming at 2K-4K. At 4K most games are GPU-limited and the difference in frame rate will be small.

Come on AMD, keep pushing; let's have better chips and value.
 

TheAssist

Member
Interesting from this chart: the moment RT kicks in, wattage goes up, because RT eats CPU cycles. I really wish we had people who test these chips and know what they are testing, because at the moment it just feels like they run through a random script of areas that don't really push those cores, even when they pick the right game.

I wish they tested with:
3090 + 12900K
The Ascent + max RT, 1080p ultra settings
Anno 1800 (fully built map), 1080p max settings
They Are Billions, 1080p max settings (fully built map)
Riftbreaker + RT, 1080p ultra settings (fully built map)
Cyberpunk market area, 1080p max settings + max RT
Control (hallway fight), 1080p max settings + max RT
Two Point Hospital (max settings, 1080p)

Just a few examples. All of them bottleneck my 3080 on a 9900K at 5.0 GHz with RT on, big time; this would really showcase how fast those cores are in reality and what to expect.

Sadly, no dice.
I mean, sure, but all these articles probably had razor-sharp deadlines, and you need to do the tests that are most interesting for the widest audience. I think these guys did a ton of work and God knows how many benchmarks.

They also have a good community who will do these kinds of tests if you ask them.

I wouldn't expect anything too crazy right around launch; give the good journalists and tech enthusiasts some time and these benchmarks will come :)
 