Skifi28
Member
> because you don't accept reality or what?
For starters, it has nothing to do with the actual thread. Just people finding little excuses to bring the console warrior out.
> but it confirms the AMD statement.
Confirms what?
> The features AMD talks about are not DX12U-exclusive; AMD says the Series consoles have the AMD versions of hardware support for those features, and those features will be used in the PC space, which doesn't just use DX12U.
You're twisting your words just like Microsoft. Sometimes I suspect you work for Microsoft's Xbox division.
What other features does AMD talk about that aren't DX12U features? DX12U is essential for Microsoft games.
Edit: I think I'm done here; I'm not going to derail the thread anymore.
> It confirms the hardware support for the features AMD mentioned: we see Tier 2 VRS in Xbox games, and we have the Metro dev confirming it, so we can presume the same is true for SFS and Mesh Shaders as per the AMD statement.
Those are still DX12U features. My gosh.
> Those are still DX12U features. My gosh.
VRS was used in Modern Warfare in 2019, so no. The tweets above say Metro used it on PS4.
VRS 2.0 is still VRS, and VRS is a DX12U feature.
> VRS was used in Modern Warfare in 2019, so no. The tweets above say Metro used it on PS4.
It was a software implementation.
I wish they had also compared it to running native at whatever the base resolution is for the different FSR modes. It would have been good to compare that visually and see what performance cost was associated with FSR.
So the PS5 is RDNA 1 or RDNA 1.5 and is keeping up with the Xbox's RDNA 2? Or are both RDNA 2, like AMD says? Honestly, if the games run well and they make awesome or good games, I don't care what it is.
Ah yes, @JettSeriesX here explaining how the PS5 doesn't have the exact same dedicated RT cores (Ray Accelerators) in its GPU as the PC RDNA 2 cards. Very reliable indeed...
Why are we talking about some RDNA shit here? I am glad I made a legit thread yesterday instead of whatever this is...
He has no idea about the inner workings of the PS5. This dude has no authority to make such presumptuous claims at all. His fucking Twitter account name is JettSeriesX - he's just another fanboy trying to spread FUD.
And PS5 doesn't have dedicated RT cores? Well, neither does the Series X because RDNA 2 doesn't use RT Cores.
VRS was already confirmed to be inferior to the Geometry Engine by the PS5's former principal software engineer:
And anyone making definitive statements about the PS5's mesh shaders, or lack thereof, without being an actual authority on the PS5 is just trying to spread FUD.
> For someone who claims to have me on ignore, you quote a lot of my posts.
And what was your relevant discussion about? Outside of the sponsored Xbox console? You don't have a single clue what you are talking about, and you're quite ignorant about every tech aspect. You just parrot PR sentences picked up here and there, nothing more. You sell hardware VRS as a revolutionary feature when it's just a low-precision effect to save performance. You talk about machine learning when nothing in the Series X hardware is about ML; it just has more CUs to help it, but no true hardware resources built for it.
No actual relevant discussion from you as usual.
A hint on how AMD's MCM GPUs might work:
Oh boy, the green meltdowns will eclipse what happened when the 6900XT hit.
V-Cache is AMD's first implementation of TSMC's 3DFabric technology, allowing them to stack 64MB of L3 cache directly on top of a Zen 3 CCD, effectively tripling the available L3 cache. This means a whopping 192MB of cache for Ryzen 9 CPUs (64MB L3 base + 2 CCDs), and presumably up to 96MB for the rest of their lineup (though no CPUs have officially been announced; only a 5900X prototype was shown). AMD is reporting an average 15% FPS boost compared to a regular 5900X (both locked to 4 GHz), including a 25% average boost in Monster Hunter World. Article on V-Cache here.
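A quick back-of-the-envelope check of those numbers (assuming the known 32 MB of base L3 per Zen 3 CCD; the FPS baseline below is purely hypothetical, just to illustrate AMD's claimed averages):

```python
# Sanity-check of AMD's 3D V-Cache figures for Zen 3.
BASE_L3_PER_CCD_MB = 32   # stock L3 per Zen 3 CCD
VCACHE_PER_CCD_MB = 64    # stacked V-Cache die on top of the CCD

l3_per_ccd = BASE_L3_PER_CCD_MB + VCACHE_PER_CCD_MB  # 96 MB per CCD
ryzen9_total = 2 * l3_per_ccd                        # dual-CCD Ryzen 9: 192 MB

print(l3_per_ccd)                          # 96  -> single-CCD parts
print(ryzen9_total)                        # 192 -> matches the article
print(l3_per_ccd / BASE_L3_PER_CCD_MB)     # 3.0 -> "effectively tripling" checks out

# AMD's claimed average uplift at matched 4 GHz clocks, on a made-up baseline:
baseline_fps = 100.0                       # hypothetical, for illustration only
print(baseline_fps * 1.15)                 # 115.0 (average), MHW claimed ~+25%
```

The arithmetic lines up with the post: 64 MB stacked on a 32 MB CCD triples per-CCD L3, and two such CCDs give the 192 MB headline figure.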
> a faster Ryzen CPU
Oh, you sweet summer child...
> You talk about machine learning when nothing in the Series X hardware is about ML; it just has more CUs to help it, but no true hardware resources built for it.
All fanboys from both sides do that, parroting nonsense they read somewhere. The Xbox is using hardware ML to do Auto HDR on old games that don't have HDR.
And, as always, a laughing gif. So predictable and idiotic. It's unbelievable how you embarrass yourself every time you post something; it's practically impossible to have any kind of intelligent discussion with you, because you know perfectly well that you have limited knowledge, and the only thing you are able to do is instigate.
Epic's comparison:
Native 4k:
FSR from 1080p:
That is actually impressive. Source?
> Source?
That other place.
> DLSS 2 where details are brought out
Oh boy...
> So unlike the gaming community to attempt to draw conclusions from a 4-minute reveal of brand new software that no one has seen or used in the wild.
Yes, and misunderstanding the PC requirements of the development tool used to run the demo, thinking they were the requirements for the demo binary. Famous NeoGAF members are guilty of that...!
| Resolution (from -> to) | Comparison | Performance |
|---|---|---|
| 720p -> 1440p | TAA vs TSR | 81 FPS vs 79 FPS |
| 720p -> 1440p | Native 1440p vs TSR | 44 FPS vs 79 FPS |
| 1080p -> 1440p | TAA vs TSR | 61 FPS vs 58 FPS |
| 1080p -> 1440p | Native 1440p vs TSR | 44 FPS vs 58 FPS |
| 2880p -> 1440p (downscale) | Native 1440p vs 2880p | 44 FPS vs 14 FPS |
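The ratios buried in that table are the interesting part (my own arithmetic from the listed FPS numbers, not Epic's figures):

```python
# Speedup/overhead ratios derived from the TSR benchmark table above.
native_1440p = 44        # FPS, native 1440p
tsr_720p = 79            # FPS, TSR upscaling 720p -> 1440p
taa_720p = 81            # FPS, plain TAA at 720p internal res
tsr_1080p = 58           # FPS, TSR upscaling 1080p -> 1440p

print(round(tsr_720p / native_1440p, 2))   # 1.8  -> ~1.8x faster than native 1440p
print(round(tsr_1080p / native_1440p, 2))  # 1.32 -> ~1.32x faster
print(round((taa_720p - tsr_720p) / taa_720p * 100, 1))  # 2.5 -> ~2.5% cost vs TAA
```

In other words: TSR from 720p is nearly native-720p speed while targeting 1440p, which is where the "very small performance hit" claim below comes from.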
> 1) Anti-lag (copypasted by NV)
Anti-lag has existed for years on Nvidia cards. It was under "number of pre-rendered frames"; just some time ago they changed the name to "low latency".
> So I found the source: it isn't FSR but TSR, Unreal Engine 5's Temporal Super Resolution. It works completely differently from FSR and on paper sounds pretty great to me.
If I recall, doesn't TSR use motion vectors? Very promising if so.
Upsides:
Very small performance hit
No exotic hardware requirements (works even with Vega)
Excellent temporal stability and no flickering on faraway objects with complex geometry
Looks considerably better than TAA, particularly on the edges of faraway objects. 720p TSR sometimes even beats 1080p TAA (definitely so in motion)
Negatives:
Still bugs and artifacts on moving objects/characters
Nanite can reduce geometry detail (up to 4x when doing 50% upscaling), since it strives to show about 1 polygon per pixel and doesn’t account for upscaling. It’s similar to the bugs DigitalFoundry has mentioned with LODs.
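The geometry-reduction point in the list above is just pixel arithmetic: Nanite targets roughly one triangle per rendered pixel (per Epic's stated heuristic), so rendering at 50% of the output resolution per axis quarters the pixel count and hence, at worst, quarters the triangle budget. A minimal sketch of that reasoning:

```python
# Why "50% upscaling" can mean up to 4x less Nanite geometry:
# Nanite aims for ~1 triangle per *rendered* pixel, so triangle count
# scales with the internal pixel count, not the output pixel count.
def internal_pixels(out_w: int, out_h: int, scale: float) -> int:
    """Pixel count when rendering at `scale` of the output resolution per axis."""
    return int(out_w * scale) * int(out_h * scale)

full = internal_pixels(2560, 1440, 1.0)   # native 1440p
half = internal_pixels(2560, 1440, 0.5)   # 50% screen percentage (720p internal)
print(full // half)                        # 4 -> up to 4x fewer triangles drawn
```

Which is why upscaled Nanite shots can show the same kind of detail pop-in that Digital Foundry attributed to LOD bugs.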
Unreal Engine 5 Temporal Super Resolution Tested
The latest version of Epic Games' renowned game engine, Unreal Engine 5, features a new upsampling algorithm called Temporal Super Resolution that some believe could strongly compete with NVIDIA DLSS and AMD's upcoming Gaming Super Resolution technology.
www.thefpsreview.com
> Anti-lag has existed for years on Nvidia cards. It was under "number of pre-rendered frames"; just some time ago they changed the name to "low latency".
NV's answer to Radeon Anti-Lag (doing exactly the same thing) is "Reflex", but gud story bruh.
> So I found the source: it isn't FSR but TSR, Unreal Engine 5's Temporal Super Resolution.
I misread "this is Epic's comparison", sorry (the context was misleading).
> NV's answer to Radeon Anti-Lag (doing exactly the same thing) is "Reflex", but gud story bruh.
So, it's 2 anti-lag options from Nvidia vs 1 from AMD? If yes, Nvidia is still winning...
> so, it's 2 anti-lag options from Nvidia
It seems so, yes.
> That other place.
I also like these pictures from the same source:
Oh boy...
From Death Stranding, from images hyped into oblivion, bush over dude's head:
This is after DLSS QUALITY (so, 1440p > 4k upscale, Epic's example is 1080p > 4k)
people are mistaking improved lines (NV made the best TAA derivative we have, lines are great) for something else.
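For scale, the two upscaling factors being compared in that post (my arithmetic): DLSS Quality at a 4K output reconstructs from 1440p, a 2.25x jump in pixel count, while Epic's 1080p-to-4K example is a 4x jump:

```python
# Pixel-count ratios for the two upscales being compared.
uhd = 3840 * 2160    # 4K output
qhd = 2560 * 1440    # DLSS Quality internal resolution at 4K output
fhd = 1920 * 1080    # internal resolution in Epic's example

print(uhd / qhd)     # 2.25 -> DLSS Quality reconstructs 2.25x the pixels
print(uhd / fhd)     # 4.0  -> Epic's example reconstructs 4x the pixels
```

So the DLSS Quality screenshots start from noticeably more source pixels than the 1080p-to-4K example they're being held up against.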
> That other place.
> Oh boy...
> From Death Stranding, from images hyped into oblivion, bush over dude's head:
> This is after DLSS QUALITY (so, 1440p > 4k upscale, Epic's example is 1080p > 4k)
> people are mistaking improved lines (NV made the best TAA derivative we have, lines are great) for something else.
So, DLSS 2.0.