Try to use it with a game that supports VRS. Oh wait, you can't.
Why don't we get real and factor in "I run it at 1440p and claim it's 4K" and "I run it at 1080p and claim it's 1440p" openly, as real men?
The FUD is so strong within you, Kenpachi. I can see the glow around your posts.
On a serious note, Igor's Lab also confirmed HUB's observations (they used a different route though, disabling cores).
At this point, nv drivers eating more of your CPU could be regarded as an established fact.
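For anyone who wants to try the core-disabling approach themselves: Igor cut cores down in the platform itself, but you can approximate the same CPU bottleneck by pinning the game process to a few logical CPUs. Here's a minimal sketch using Python's psutil; the process name and core counts are placeholders, not anything Igor actually used:

```python
# Rough illustration of the "disable cores" method: restrict a game's
# process to a limited set of logical CPUs to force a CPU bottleneck,
# then compare frame rates between GPUs/drivers on the same settings.
# Placeholder names throughout -- this is not Igor's actual tooling.
import psutil

GAME_EXE = "game.exe"  # hypothetical process name

def pin_to_cores(process_name: str, num_logical_cpus: int) -> None:
    """Pin every process matching `process_name` to the first N logical CPUs."""
    allowed = list(range(num_logical_cpus))
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            proc.cpu_affinity(allowed)
            print(f"Pinned PID {proc.pid} to CPUs {allowed}")

if __name__ == "__main__":
    # Approximate a 2-core/4-thread CPU, like the scaled-down 5800X in the test.
    pin_to_cores(GAME_EXE, 4)
```

Run the same game on both cards with the same pinning, and any extra CPU cost in the driver should show up as a bigger frame rate hit on the pinned config.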
Dude, this is implicitly claiming videocardz is involved in some sort of anti-Nvidia conspiracy.
Tell that to AMD. Why are they even busy making a DLSS alternative? Just lower the resolution, guys!
And about driver overhead.
Remember exactly what I said in the last topic that was specifically about this?
Here, I'll quote a small part of it.
You need to bench a metric ton of games, at all kinds of different settings, with lots of GPUs across different GPU architectures, and different CPU architectures with different core counts, etc., with different drivers for every card, different patches of the games themselves at different settings, and different hardware setups like memory and motherboards, and base your conclusions on that.
I would add to that: testing other APIs would also be handy to give you a better view of what AMD vs Nvidia is doing. See the sketch below for how fast that matrix blows up.
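Just to illustrate the scale of it, here's a toy sketch of that benchmark matrix. Every name in it is a placeholder, and the counts are deliberately modest:

```python
# Toy sketch of the benchmark matrix described above -- all entries are
# placeholders; the point is how quickly the combinations explode.
from itertools import product

games       = ["Game A", "Game B", "Game C"]
apis        = ["DX11", "DX12", "Vulkan"]
gpus        = ["RTX 3070", "RX 6800"]           # ideally several per vendor/architecture
cpu_configs = ["2c/4t", "4c/8t", "6c/12t", "8c/16t"]
resolutions = ["720p", "1080p", "1440p", "2160p"]
drivers     = ["driver_old", "driver_new"]

matrix = list(product(games, apis, gpus, cpu_configs, resolutions, drivers))
print(f"{len(matrix)} benchmark runs")  # 3*3*2*4*4*2 = 576 runs, before you even
                                        # add game patches, memory/motherboard
                                        # variations, or repeat passes
```

And that's already 576 runs with only three games and two cards; a single game on a single platform proves nothing either way.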
So let's look at your Igor's Lab test:
Uniform test platform
In order to be able to exclude all possible influences by different motherboards, CPUs, memory modules and operating system installations, I created all benchmarks with an exemplary DirectX12 game on one and the same platform, which scales from 2 up to 8 cores (SMT on each) still cleanly over 4 to 16 threads. The game uses two graphics cards from NVIDIA and AMD that are roughly equally fast at WQHD resolution, as well as a Ryzen 7 5800X that I’ve gradually reduced to 2 cores / 4 threads to create the CPU bottleneck. Current drivers are installed and the game has been fully patched. The screen resolution ranges from 720p, 1080p and 1440p to 2160p.
So he used 1 motherboard, 1 type of memory, 1 BIOS, 1 CPU architecture (hell, one single CPU, lol), 1 GPU architecture and 1 card each from Nvidia and AMD, 1 driver, and 1 version of Windows on 1 updated Windows revision.
It's even worse than what Hardware Unboxed did. Look, Hardware Unboxed is biased towards AMD, which was obvious from his demonstration, but hey, at least he put in a little bit of effort, even if it was useless to say the least; this guy just shat the bed entirely.
Then, on top of it, he tested on a single game that is known to be riddled with bugs and performance problems on PC, specifically on Nvidia and Intel hardware. It was so bad it got universally slammed by every outlet, to the point they had to redesign a lot of it. The port looks even worse when you realize it uses the same engine as Death Stranding, which runs like a dream on PC. And with the newest updates, just released, delivering sometimes almost double the performance on Nvidia GPUs, it's pretty fucking clear the game is a mess.
Look, they could be perfectly right about the overhead, and honestly I wouldn't be shocked if it's there, because DX12 simply isn't favorable for Nvidia. With some good, proper testing you could easily showcase this, if that's the case. But those hillbilly tests that prove nothing other than their narrative aren't going to prove anything.
And about my bias: I have no bias towards any corporation. I see it how it is, and frankly sugarcoating isn't in my vocabulary, which seems to trigger people like you hard. If a card is shit, I will tell you how it's shit and why it's shit. And this 6700 card that costs half a grand for mid-range tier is laughably shit, and any outlet would slam the card for it, especially when it can't compete with Nvidia now that raytracing has started to become a thing.
The problem with you, however, is that you pick whatever obscure or biased outlet fits your bill and ignore every single feature and function unless it favors AMD. That's why I said AMD users suddenly caring about overhead in an API like DX12 (which is just one API) is laughable. AMD has been heavily hit by API overhead for the last decade, and nobody who supports that company seemed to care, report on it, or even mention it; they all covered it up, Hardware Unboxed included, with the same idiot tests I've been slamming these outlets over for a decade. Yet now they suddenly care, which makes it even more hypocritical of them.
Anyway, DX12 is under heavy development, unlike DX11. With RTX IO and the DirectStorage API Microsoft is working on, we will see massive changes this year or the next, when PS5 games start to hit, to the point that we'll see drastic changes on the GPU front from Nvidia. This is why the 3000 series feels kind of like what the 500 series was: a last-gen GPU on steroids. AMD is more ready for this with their current GPUs, if they get their software under control, which has been an issue with AMD for the last decade now; so nobody gets their hopes up on that one anymore, and even the last remaining diehards have probably all moved over to Nvidia at this point.