
AMD: FSR 2.0 will run on Xbox and these Nvidia graphics cards

ArtHands

Thinks buying more servers can fix a bad patch
It's that Nvidia comes out with a feature that won't work on my Nvidia video card, and then AMD is the one that brings it to me in a form I can use.

NIS (Nvidia Image Scaling) works on your Nvidia video card. It's the equivalent of FSR.
 

M1chl

Currently Gif and Meme Champion
This is wildly incorrect. Sony gets semi-custom parts from AMD the same exact way Microsoft does. Both are customers of AMD's semi-custom business where they can pick and choose any feature or roadmap feature and then integrate their very own IP into the mix.

And of course it doesn't mean PS5 won't support FSR 2.0. Of course it will, but what you just said is inaccurate. Microsoft is also licensing AMD tech and modifying a great deal of it. This is why Series X|S have machine learning built in and PC RDNA 2 does not. It's also why Series X GPU has custom texture filters built into the GPU for Sampler Feedback Streaming that's not present in PC RDNA 2. Not just that, Series X GPU goes beyond the RDNA 2 specs for Tier 2 VRS, as well as having larger group size support for Mesh Shaders beyond what the PC hardware has.

The spec max for Mesh Shader group size is 128. Series X can go up to 256. AMD also confirmed with RDNA 2 on their official youtube page that the group size support for Mesh Shaders is still below what Series X possesses.



I see that you refute none of what I said. Besides, I have access to the SDK on both platforms, so... MS said to AMD "hey, could you do this for us" vs Sony "hey, can we borrow this and do the rest ourselves". Wildly inaccurate my ass. Also, what the fuck does Nvidia Turing have to do with Series X?
 

Riky

$MSFT
This is wildly incorrect. Sony gets semi-custom parts from AMD the same exact way Microsoft does. Both are customers of AMD's semi-custom business where they can pick and choose any feature or roadmap feature and then integrate their very own IP into the mix.

And of course it doesn't mean PS5 won't support FSR 2.0. Of course it will, but what you just said is inaccurate. Microsoft is also licensing AMD tech and modifying a great deal of it. This is why Series X|S have machine learning built in and PC RDNA 2 does not. It's also why Series X GPU has custom texture filters built into the GPU for Sampler Feedback Streaming that's not present in PC RDNA 2. Not just that, Series X GPU goes beyond the RDNA 2 specs for Tier 2 VRS, as well as having larger group size support for Mesh Shaders beyond what the PC hardware has.

The spec max for Mesh Shader group size is 128. Series X can go up to 256. AMD also confirmed with RDNA 2 on their official youtube page that the group size support for Mesh Shaders is still below what Series X possesses.




That's really interesting about Mesh Shaders, adding that info to ML and SFS customisations makes Series X far more forward looking than people first thought.
 

Topher

Gold Member
It's sad how third parties won't push the Series X to its limit because PlayStation is the more popular console atm, so they will just do the bare minimum. I just hope the first-party devs can go all out on it.

Really? Cuz third parties had no problem pushing Xbox One X over PS4 Pro despite PS4, in general, being the "more popular console". If you bought the XSX because you thought it was going to wipe the floor with the PS5, then that's just not what happened. The good thing, though, is that you bought a damn fine console in the XSX. I suggest being happy with that.
 

rnlval

Member
This is wildly incorrect. Sony gets semi-custom parts from AMD the same exact way Microsoft does. Both are customers of AMD's semi-custom business where they can pick and choose any feature or roadmap feature and then integrate their very own IP into the mix.

And of course it doesn't mean PS5 won't support FSR 2.0. Of course it will, but what you just said is inaccurate. Microsoft is also licensing AMD tech and modifying a great deal of it. This is why Series X|S have machine learning built in and PC RDNA 2 does not. It's also why Series X GPU has custom texture filters built into the GPU for Sampler Feedback Streaming that's not present in PC RDNA 2. Not just that, Series X GPU goes beyond the RDNA 2 specs for Tier 2 VRS, as well as having larger group size support for Mesh Shaders beyond what the PC hardware has.

The spec max for Mesh Shader group size is 128. Series X can go up to 256. AMD also confirmed with RDNA 2 on their official youtube page that the group size support for Mesh Shaders is still below what Series X possesses.
PC RDNA 2 hardware has support for the inline tensor math feature with the compute unit.



From https://wccftech.com/amd-microsoft-...ctml-to-life-4x-improvement-with-rdna-2-gpus/
AMD & Microsoft Collaborate To Bring TensorFlow-DirectML To Life, Up To 4.4x Improvement on RDNA 2 GPUs.

This was tested on AMD Radeon RX 6900 XT and RX 6600 XT graphics hardware. The largest improvement was seen on device training scores, showing an increase of 4.4x improvement.

It's in AMD's interest to spread machine learning features across RDNA 2 SKUs.


Moving Gears to Tier 2 Variable Rate Shading

Chris Wallis, Senior Software Engineer at The Coalition

Tier 2 VRS allowed Gears 5/Tactics to see an up to 14% boost in GPU perf with no perceptible impact on visual quality. It is available on all hardware supporting DirectX 12 Ultimate, including Xbox Series X|S, AMD Radeon™ RX 6000 Series graphics cards, and NVIDIA GeForce RTX 20 Series and 30 Series GPUs.
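As a rough back-of-the-envelope sketch (my own arithmetic, not from the blog post) of what an "up to 14% boost in GPU perf" can mean in frame-rate and frame-time terms for a GPU-bound game:

```python
def boosted_fps(fps, gain=0.14):
    """FPS after a GPU-bound perf gain of `gain` (0.14 = 14%),
    assuming the gain translates directly to frame rate."""
    return fps * (1 + gain)

def saved_ms_per_frame(fps, gain=0.14):
    """Frame time saved per frame by the same gain."""
    return 1000 / fps - 1000 / boosted_fps(fps, gain)

# Example: a 60 FPS GPU-bound scene with the full 14% gain
print(round(boosted_fps(60), 1), "FPS,",
      round(saved_ms_per_frame(60), 2), "ms saved per frame")
```

The saved milliseconds are budget the renderer can spend elsewhere, which is why a "free" Tier 2 VRS win is attractive even when the FPS delta looks modest.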
 

Riky

$MSFT
Once the cross-gen period ends, all third-party and Xbox first-party games will be limited by Series S (one way or another...).

Until the minimum spec for the PC versions exceeds Series S, not really, since it's all on the same GDK; plus Steam Deck is now a thing.
 

GreatnessRD

Member
It's that Nvidia comes out with a feature that won't work on my Nvidia video card, and then AMD is the one that brings it to me in a form I can use.
This is a part that a lot of people (fanboys in particular) miss. If FSR 2.0 brings the heat and is usable on all brands like they claim?! That's gonna be crazy. Nvidia's gonna have to call a mean audible, imo.
 

Rudius

Member
Quality mode on a 6800XT takes less than 1.1ms @4K.
The FSR 2.0 overhead difference between quality modes is seemingly negligible. What changes is mostly performance and output IQ.


https://cdn.videocardz.com/1/2022/03/AMD-FSR2-GPU1.jpg





All you need to do is take Deathloop's performance and add the overhead.

For example, to get FSR 2.0 Quality mode with 4K output the game will render at 1440p natively.
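That mapping can be sketched in a few lines, assuming AMD's published per-axis scale factors for FSR 2.0 (1.5x Quality, 1.7x Balanced, 2.0x Performance, 3.0x Ultra Performance):

```python
# Per-axis upscale ratios published for FSR 2.0 quality modes.
SCALE = {
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def render_resolution(out_w, out_h, mode):
    """Internal render resolution for a given output resolution and mode."""
    s = SCALE[mode]
    return round(out_w / s), round(out_h / s)

print(render_resolution(3840, 2160, "Quality"))      # 4K Quality -> 1440p
print(render_resolution(3840, 2160, "Performance"))  # 4K Performance -> 1080p
```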





Then you just get a Deathloop benchmark for 1440p:




And add 1.1ms to each frame time.


For example, on an RX 6800 you get 97.7 FPS; that's 1/97.7 = 10.2ms per frame.
With FSR 2.0 Quality it takes 1.1ms more, so it's 11.3ms.
1/0.0113 = 88.5 FPS for a 4K output with better-than-native IQ.
Don't forget that FSR 2.0 replaces the TAA the game normally uses, which should already be costing some milliseconds.
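The whole estimate can be sketched in a few lines (the 1.1ms overhead and the 97.7 FPS benchmark figure are the numbers quoted above; the result differs slightly from the hand-rounded 88.5 FPS):

```python
def fps_with_fsr(native_fps, overhead_ms=1.1):
    """Estimate output FPS after adding a fixed per-frame upscaling cost.
    `native_fps` is the benchmark FPS at the internal render resolution."""
    frame_ms = 1000.0 / native_fps       # native frame time in milliseconds
    return 1000.0 / (frame_ms + overhead_ms)

# RX 6800 running Deathloop at 1440p (the FSR 2.0 Quality render
# resolution for a 4K output), per the benchmark above:
print(round(fps_with_fsr(97.7), 1), "FPS at 4K output")
```

Note this ignores the TAA cost that FSR 2.0 replaces, so it slightly understates the real result.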
 

Bojji

Member
Until the minimum spec for the PC versions exceeds Series S, not really, since it's all on the same GDK; plus Steam Deck is now a thing.

Series S will be the lowest common denominator. Third parties won't care about Steam Deck, just like they don't care about Switch.

Put this thing on PS5 as well, c'mon Lisa.

It will work on any modern GPU; it just has to be implemented in games.

I think this is great news for console games; reconstructed 1080p will look much better than what Dying Light 2 or GotG had to offer.
 

Riky

$MSFT
Series S will be the lowest common denominator. Third parties won't care about Steam Deck, just like they don't care about Switch.
You a developer or publisher?

All the PC games I've played recently have had a minimum spec well below Series S; we're already into the second year of this gen and that hasn't changed.
Games will scale the same way they always have, no need for concern.
 

Bojji

Member
You a developer or publisher?

All the PC games I've played recently have had a minimum spec well below Series S; we're already into the second year of this gen and that hasn't changed.
Games will scale the same way they always have, no need for concern.

Most games right now are cross-gen, so X1/PS4 level of performance is the floor. This will change, of course. Every piece of hardware below XSS level of performance will get down-ports if the publisher decides on it.
 

Thirty7ven

Banned
This is wildly incorrect. Sony gets semi-custom parts from AMD the same exact way Microsoft does. Both are customers of AMD's semi-custom business where they can pick and choose any feature or roadmap feature and then integrate their very own IP into the mix.

And of course it doesn't mean PS5 won't support FSR 2.0. Of course it will, but what you just said is inaccurate. Microsoft is also licensing AMD tech and modifying a great deal of it. This is why Series X|S have machine learning built in and PC RDNA 2 does not. It's also why Series X GPU has custom texture filters built into the GPU for Sampler Feedback Streaming that's not present in PC RDNA 2. Not just that, Series X GPU goes beyond the RDNA 2 specs for Tier 2 VRS, as well as having larger group size support for Mesh Shaders beyond what the PC hardware has.

The spec max for Mesh Shader group size is 128. Series X can go up to 256. AMD also confirmed with RDNA 2 on their official youtube page that the group size support for Mesh Shaders is still below what Series X possesses.

You're going to need to start posting actual sources because it's getting tiresome.

The only thing MS added was the HW texture filters. The rest is all standard DX12U RDNA2 parity. This was said by an MS engineer on Twitter. Your whole "MS added INT support not in RDNA2" claim is a bunch of bullshit.

Regarding mesh shaders, source? Because what I see from AMD's site is not that the spec max is 128; it's the ideal count for primitives and vertices, "between 64 and 128".
 

Riky

$MSFT
"With over 12 teraflops of FP32 compute, RDNA 2 also allows for double that with FP16 (yes, rapid-packed math is back). However, machine learning workloads often use much lower precision than that, so the RDNA 2 shaders were adapted still further.

"We knew that many inference algorithms need only 8-bit and 4-bit integer positions for weights and the math operations involving those weights comprise the bulk of the performance overhead for those algorithms," says Andrew Goossen. "So we added special hardware support for this specific scenario."

Like with SFS, Series consoles are customized beyond standard RDNA2.
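For a sense of scale, here's the arithmetic implied by the quote, using the Series X's published 12.15 TFLOPS FP32 figure and assuming throughput doubles at each halving of precision (these are Microsoft's quoted numbers, not something verified here):

```python
fp32 = 12.15        # Series X published FP32 throughput (TFLOPS)
fp16 = fp32 * 2     # rapid-packed math doubles FP16 throughput
int8 = fp32 * 4     # 8-bit integer ops double it again
int4 = fp32 * 8     # 4-bit integer ops double it once more

print(f"FP16: {fp16} TFLOPS, INT8: {int8} TOPS, INT4: {int4} TOPS")
```

That INT8/INT4 throughput is the headroom the quote says was added for inference workloads such as ML upscaling.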
 
"With over 12 teraflops of FP32 compute, RDNA 2 also allows for double that with FP16 (yes, rapid-packed math is back). However, machine learning workloads often use much lower precision than that, so the RDNA 2 shaders were adapted still further.

"We knew that many inference algorithms need only 8-bit and 4-bit integer positions for weights and the math operations involving those weights comprise the bulk of the performance overhead for those algorithms," says Andrew Goossen. "So we added special hardware support for this specific scenario."

Like with SFS, Series consoles are customized beyond standard RDNA2.

So we have special hardware in series consoles for FSR? Just like DLSS?
 

Thirty7ven

Banned
"With over 12 teraflops of FP32 compute, RDNA 2 also allows for double that with FP16 (yes, rapid-packed math is back). However, machine learning workloads often use much lower precision than that, so the RDNA 2 shaders were adapted still further.

"We knew that many inference algorithms need only 8-bit and 4-bit integer positions for weights and the math operations involving those weights comprise the bulk of the performance overhead for those algorithms," says Andrew Goossen. "So we added special hardware support for this specific scenario."

Like with SFS, Series consoles are customized beyond standard RDNA2.

RDNA2 vs RDNA1.

"We"

 

Riky

$MSFT
It's quite clear: "the RDNA2 shaders", no mention of previous generations at all.
"We" as in the Xbox team; it's an interview about Series hardware.
You're really clutching at straws now.

David Cage confirmed it.

"The shader cores of the Xbox are also more suitable to machine learning, which could be an advantage"
 

DJ12

Member
So we have special hardware in series consoles for FSR? Just like DLSS?
No, as it works on bog-standard AMD cards that don't have any of MS's customisations.

It's coming to all devices and APIs soon enough.

It's just on Xbox because AMD has done the DirectX implementation, which by default includes Xbox.
 

FireFly

Member
FSR 2.0 has been confirmed not to use any machine learning at all, so debates about whether a given console has the AI "secret sauce" have no place in this thread.
 

Laptop1991

Member
Good news if it's anything like DLSS 2. I've just used it in Cyberpunk 2077: everything up full with all RT on apart from blur and DoF, and high fps with no drops, really smooth. Great tech, although there need to be more games that use it. Also, AMD and Nvidia don't usually play well together; they end up competing, which doesn't help PC games.
 
We are clearly seeing the beginning stages of where Machine Learning Super Resolution for Series X will come from. It was never going to be that spatial solution AMD used for FSR 1.0, but this apparently very good temporal solution, if ML will even be necessary in the first place. Looking forward to seeing more.
 
I see that you refute none of what I said. Besides, I have access to the SDK on both platforms, so... MS said to AMD "hey, could you do this for us" vs Sony "hey, can we borrow this and do the rest ourselves". Wildly inaccurate my ass. Also, what the fuck does Nvidia Turing have to do with Series X?

Ahhh, so you're just making stuff up and trolling I see. Microsoft and Sony's dealings with AMD are identical. Sony ain't just borrowing bits and pieces and doing all the rest themselves with no help from AMD. Are you sick in the head? They are both customers of AMD's semi-custom business and AMD has reiterated how this arrangement works over and over. Both companies take and choose what IP they want from AMD's product roadmap, and then they integrate their own special additions, tweaks or personal IP to it, but this is done in partnership with AMD.

AMD built Sony's SoC just as they built Microsoft's and did validation testing and all that stuff. Sony did not make AMD's IP themselves, Microsoft did not make AMD's IP themselves. They work together and collaborate on ideas, and then those ideas can become part of future or current AMD products, but this relationship and the ability to customize AMD's IP isn't unique to either Sony or Microsoft. It's the same on both ends.

If you believe Sony went ahead and was just like "okay, give us this" and then went and built AMD's RDNA 2 based GPU, Zen 2 CPU and the whole SoC entirely on their own with no assistance or support from AMD at all, then you, my friend, are way too far gone to take seriously. Microsoft built and made many of their own drivers for Series X, just as Sony has done on their end. Even the firmware for the SSD in Series X is custom created by Microsoft.

Microsoft quite literally notifies the development community of driver updates to the Series X GPU. I don't know where you're getting this stupid idea (it's old console-wars thinking) that everything Microsoft does is off the shelf and lacking in innovation, while Sony is taking a few scraps or pieces of their own and other people's bootstraps from here and there and magically creating entirely new CPU and GPU designs closely matching AMD IP themselves. Do you actually believe this garbage?

And Turing was used as an example to show that Series X has Mesh Shader capability or specs beyond other mesh-shading-capable cards. I showed evidence for it with Turing, but I guess I shouldn't have expected you to take my word for it on PC RDNA 2 either. So here's that proof for you. PC RDNA 2 has the same spec max for mesh shaders as Turing, though at the time of those tests a group size of 32 was the sweet spot for Turing. Series X supports Mesh Shader group sizes beyond even PC RDNA 2 cards such as the RX 6800 XT. Series X goes up to a 256 max thread or group size, and it has been tested to deliver better performance than smaller group sizes. The max on PC RDNA 2 for mesh shaders is 128.

So I've provided some receipts to back up what I'm saying. Where is the evidence to support the crap you're saying? There's even a video straight from AMD's presentation to back up what I said about the PC spec.



 

Thirty7ven

Banned
Why can’t you just, I don’t know, uh, learn how to read? Fucking hell man, you gotta be on the payroll at this point. He’s talking about feature sets in the API.
 

marquimvfs

Member
Ahhh, so you're just making stuff up and trolling I see. Microsoft and Sony's dealings with AMD are identical. Sony ain't just borrowing bits and pieces and doing all the rest themselves with no help from AMD. Are you sick in the head? They are both customers of AMD's semi-custom business and AMD has reiterated how this arrangement works over and over. Both companies take and choose what IP they want from AMD's product roadmap, and then they integrate their own special additions, tweaks or personal IP to it, but this is done in partnership with AMD.

AMD built Sony's SoC just as they built Microsoft's and did validation testing and all that stuff. Sony did not make AMD's IP themselves, Microsoft did not make AMD's IP themselves. They work together and collaborate on ideas, and then those ideas can become part of future or current AMD products, but this relationship and the ability to customize AMD's IP isn't unique to either Sony or Microsoft. It's the same on both ends.

If you believe Sony went ahead and was just like "okay, give us this" and then went and built AMD's RDNA 2 based GPU, Zen 2 CPU and the whole SoC entirely on their own with no assistance or support from AMD at all, then you, my friend, are way too far gone to take seriously. Microsoft built and made many of their own drivers for Series X, just as Sony has done on their end. Even the firmware for the SSD in Series X is custom created by Microsoft.

Microsoft quite literally notifies the development community of driver updates to the Series X GPU. I don't know where you're getting this stupid idea (it's old console-wars thinking) that everything Microsoft does is off the shelf and lacking in innovation, while Sony is taking a few scraps or pieces of their own and other people's bootstraps from here and there and magically creating entirely new CPU and GPU designs closely matching AMD IP themselves. Do you actually believe this garbage?

And Turing was used as an example to show that Series X has Mesh Shader capability or specs beyond other mesh-shading-capable cards. I showed evidence for it with Turing, but I guess I shouldn't have expected you to take my word for it on PC RDNA 2 either. So here's that proof for you. PC RDNA 2 has the same spec max for mesh shaders as Turing, though at the time of those tests a group size of 32 was the sweet spot for Turing. Series X supports Mesh Shader group sizes beyond even PC RDNA 2 cards such as the RX 6800 XT. Series X goes up to a 256 max thread or group size, and it has been tested to deliver better performance than smaller group sizes. The max on PC RDNA 2 for mesh shaders is 128.

So I've provided some receipts to back up what I'm saying. Where is the evidence to support the crap you're saying? There's even a video straight from AMD's presentation to back up what I said about the PC spec.




Holy shit, boy. You're talking about silicon; M1chl is talking about software (libraries, drivers, etc.). What you're saying has nothing to do with the matter.
 