> Let's wait until we try it for ourselves first, or at least until there are decent comparisons out there.

I have my prediction, and have had it for a pretty long time anyway.
Yeah, let's see who is going to have the last laugh, boi :D
> who is going to have the last laugh

Is going to have last laugh who?
Time stamped
Screenshotted
> Is going to have last laugh who?

I don't do drugs... not anymore.
Dude, seriously, what the heck is with "DOA", let alone "another one"? Are you on crack, or just baiting?
> It's DLSS 1.0. AMD doesn't have tensor cores, so they can't utilize it the way Nvidia does with Ampere cards to make DLSS 2.0 look so good. Pretty sure RDNA 3.0 cards will have something equivalent to tensor cores to do the heavy lifting.

A whole past GPU series, like the 5700 XT, with features that were only "current day" back then, features Nvidia already had. The new cards are somewhat decent, but the last truly great one was something like the 7900 XT, I believe. Nobody cares if you're stuck back in time, like Intel, for example.
BTW, what was their first DOA? Ray tracing? It's really not that common at the moment, and consoles seem to be doing just fine with it. Look at the UE5 results: it's doing better than Nvidia cards. AMD cards with low-level API support often outdo Nvidia cards that are priced much higher.
I think AMD has been doing just fine. They are both selling out. I'd say it's like MS vs Sony: PS5s might be selling more, but that doesn't mean Xbox consoles aren't selling out. They are both wildly successful and hardly DOA.
> A whole past GPU series, like the 5700 XT, with features that were only "current day" back then, features Nvidia already had. The new cards are somewhat decent, but the last truly great one was something like the 7900 XT, I believe. Nobody cares if you're stuck back in time, like Intel, for example.

But again, how many ray tracing games were out back in 2019 when the 5700 XT came out? Metro, Battlefield V, and what else? Control came out a few months later, and then nothing until the 6000 series launched. DLSS 1.0 was shit, as we can now see from this AMD solution, which is pretty much the same thing. So it was three ray-traced games versus dozens of others that offered equivalent or better performance for a better price, especially in games that support the Vulkan API.
> Let's wait until real people get their hands on it. Some of these images look shady to me.

Why would AMD shoot themselves in the foot and release "shady" pictures of it in action?
> Let's wait until real people get their hands on it. Some of these images look shady to me.

It's from AMD's 4K video on their YouTube channel...
> Why would AMD shoot themselves in the foot and release "shady" pictures of it in action?

It's obviously blown up from a smaller YouTube video image. Even the native side is blurry as hell.
I think it starts at 3:40 in his video.
I think they are releasing FidelityFX for developers to use now, so it will automatically look awesome for RDNA 3. I think it's a stepping stone to RDNA 3.
> Imagine thinking just rendering at a lower resolution, scaling it up using a basic upscaler algorithm, and applying a generic ReShade-class sharpening filter would be as good as DLSS. This is even worse than checkerboard rendering or some other variant of TAA.

I didn't follow this development, so for me it was a slight shock...
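For what it's worth, the pipeline being mocked here (render low, upscale spatially, sharpen) is easy to sketch. Below is a minimal, hypothetical Python/NumPy illustration of a naive upscale plus an unsharp-mask sharpen; it's a generic stand-in for "basic upscaler + sharpening filter", not AMD's actual FSR algorithm or any real ReShade shader:

```python
import numpy as np

def upscale_nearest(img, factor):
    """Naive nearest-neighbour upscale: repeat each pixel factor x factor times."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def unsharp_mask(img, amount=0.5):
    """Generic sharpening: add back the difference between the image and a box blur."""
    padded = np.pad(img, 1, mode="edge")
    # 3x3 box blur built from shifted copies of the padded image
    blur = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

# Render "at a lower resolution", upscale, then sharpen. Note there is no
# temporal data and no machine learning anywhere -- which is the whole
# DLSS 1.0 comparison being made above.
low_res = np.random.rand(180, 320)   # stand-in for a 320x180 luminance frame
output = unsharp_mask(upscale_nearest(low_res, 4))
print(output.shape)                  # (720, 1280)
```

Real spatial upscalers use smarter kernels (bilinear, Lanczos, edge-adaptive), but the structure is the same: one frame in, one bigger frame out, plus a sharpening pass.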
To the shock of literally no one, Nvidia wins again.
> Look at the UE5 results. It's doing better than Nvidia cards. AMD cards with low-level API support often outdo Nvidia cards that are priced much higher.
> We kinda have a thread

I like my threads, tho
AMD FidelityFX Super Resolution (FSR) launches June 22nd
AMD demonstrated its super-resolution technology in Godfall, a game that already heavily depends on AMD technology. The FSR will officially be supported by all Radeon RX graphics cards, including RX Vega, RX 500, RX 5000 and RX 6000 series. In fact, AMD will enable FSR support even on Ryzen...www.neogaf.com
> Your own post with YouTube videos shows Ampere faster than RDNA 2 in Unreal 5 at 4K.

Yeah, the $1,500 35 TFLOPS Ampere card is outperforming a $1,000 23 TFLOPS AMD card at 4K.
> Last person to believe.

It's kinda sad how he's still hyping it up after we all saw how bad it looks. I guess it should be expected coming from him, since he's been downplaying DLSS and hyping up AMD's solution for a long time.
> Yeah, the $1,500 35 TFLOPS Ampere card is outperforming a $1,000 23 TFLOPS AMD card at 4K.
> But at 1440p, ALL Ampere Nvidia cards are underperforming.
> The $579 6800, a 17 TFLOPS GPU, is almost on par with the $700 MSRP 3080, a 30 TFLOPS GPU.

Why are you bringing up Ampere TFLOPS and comparing them to RDNA 2 when we all know they don't scale 1:1? There have been numerous videos and articles debunking this teraflop comparison between different architectures.
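The "don't scale 1:1" point is simple arithmetic. Taking the figures quoted above at face value (roughly equal performance from 17 vs. 30 TFLOPS cards), a quick sketch:

```python
# Rough per-TFLOPS comparison using the figures quoted in the thread.
# Assumes the two cards deliver about equal frame rates ("almost on par"),
# which is exactly the claim under discussion -- real results vary per game.
rx6800_tflops = 17.0
rtx3080_tflops = 30.0

# If both produce roughly the same FPS, relative work done per TFLOPS
# is just the inverse of the TFLOPS ratio.
perf_per_tflop_ratio = rtx3080_tflops / rx6800_tflops
print(f"RDNA 2 would be doing ~{perf_per_tflop_ratio:.2f}x more work per TFLOPS")
```

That ~1.76x gap is why a raw teraflop number says little across architectures: it counts peak shader math, not how efficiently the rest of the pipeline feeds it.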
> Why are you bringing up Ampere TFLOPS and comparing them to RDNA 2 when we all know they don't scale 1:1? There have been numerous videos and articles debunking this teraflop comparison between different architectures.
>
>> Yeah, the $1,500 35 TFLOPS Ampere card is outperforming a $1,000 23 TFLOPS AMD card at 4K.
>> But at 1440p, ALL Ampere Nvidia cards are underperforming.
>> The $579 6800, a 17 TFLOPS GPU, is almost on par with the $700 MSRP 3080, a 30 TFLOPS GPU.

Because Nvidia is marketing them and using them to overprice their cards?
> Because Nvidia is marketing them and using them to overprice their cards?

These companies false-advertise all the time, but I'm not sure it's a good idea to use their propaganda as some form of comparison. Some people here don't understand that teraflops between these architectures don't scale 1:1 and could therefore be misled.
The fact of the matter is that the UE5 demo runs better on AMD cards that are priced lower. Does this mean they run every game better? No. But AMD is by no means DOA and is doing very well keeping up with Nvidia.
> These cards are only flexed at 4K, where other bottlenecks are eliminated. It's of little importance what they do at 1080p or 1440p.

We will have to agree to disagree. This demo is pretty much our first taste of next-gen engines, and if it's struggling to do native 1440p at 60 FPS, then clearly there is a bottleneck here, and that bottleneck is GPU horsepower.
> These companies false-advertise all the time, but I'm not sure it's a good idea to use their propaganda as some form of comparison. Some people here don't understand that teraflops between these architectures don't scale 1:1 and could therefore be misled.

There are two other comparisons in my post. Pick whichever one you think fits better: the price point, which gives AMD the edge, or the actual GPU benchmarks, which also give AMD the edge.
> These cards are only flexed at 4K, where other bottlenecks are eliminated. It's of little importance what they do at 1080p or 1440p.

It would be pretty bad if UE5 were CPU-bottlenecked at 60 FPS displaying a largely static scene.
This is how I imagine some of you here. lol
You do know that site doesn't actually test what it shows, right? They most likely use an algorithm to approximate results. I would suggest not using that site for benchmarks, as it's not accurate.
You have actual videos of Unreal Engine running on both a 3090 and a 6900 XT. Why look at anything else? You can put it that way, that a $1,500 card outperforms a $1,000 one. But looking at those results, a $700 3080 would be on par with the $1,000 6900 XT.
These cards are only flexed at 4K, where other bottlenecks are eliminated. It's of little importance what they do at 1080p or 1440p.
> It's not one thing; it's probably all the components when running at low resolutions. Instead of relying fully on the GPU, you're tripping over all the rest of the system on the way. There are reviews for these cards that don't even include 1080p anymore.

Well, if you are limited by something other than the GPU, then you shouldn't see GPU scaling, which we do see in those benchmarks at least. The reason for excluding 1080p in a GPU review is that the cards don't scale at all at that resolution, so the benchmark isn't telling you anything useful. But enable ray tracing and that can change completely: it's hardly useless to test Cyberpunk 2077 at 1080p with all ray tracing effects on, for example, since we see huge differences at that resolution between a 3080/3070/3060, etc.

And Lumen is effectively a software form of ray tracing that is extremely heavy; from what Epic has said, it is the main limiter on the framerate. (Think about how they are targeting 1080p on the 2070/2080-class hardware in the consoles.)
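The bottleneck argument both sides are circling can be made concrete with a toy frame-time model: the frame takes as long as whichever of the CPU or GPU is slower, and only the GPU cost shrinks with resolution. All numbers below are invented for illustration, not measurements of any real card:

```python
# Toy model: frame time = max(CPU time, GPU time).
# GPU time scales with pixel count; CPU time is roughly resolution-independent.
def frame_time_ms(cpu_ms, gpu_ms_at_4k, pixels, pixels_4k=3840 * 2160):
    gpu_ms = gpu_ms_at_4k * pixels / pixels_4k
    return max(cpu_ms, gpu_ms)

cpu_ms = 8.0          # hypothetical CPU cost per frame
gpu_ms_at_4k = 25.0   # hypothetical GPU cost at 4K

for name, (w, h) in {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    t = frame_time_ms(cpu_ms, gpu_ms_at_4k, w * h)
    print(f"{name}: {t:.1f} ms -> {1000 / t:.0f} FPS")
```

With these made-up numbers, 1080p lands on the flat 8 ms CPU wall (so faster GPUs stop separating, which is why some reviews skip it), while 4K is purely GPU-bound. A per-pixel cost that stays heavy at low resolutions, like ray tracing or Lumen, pushes the GPU term back above the CPU wall, which is the point made above.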