
Next-Gen PS5 & XSX |OT| Console tEch threaD


Mr Moose

Member
Many here argue that the XSS is 'weaker' than the X1X. If you had been following the conversation, the argument is that the XSS doesn't have enough RAM. The X1X has more RAM and bandwidth but fewer graphical features in this game. Again, it's almost like RAM isn't the be-all and end-all.

I have not seen a head-to-head comparison of the X1X and the XSS, but where did you hear that 1080p is higher than 1440p checkerboard? How does the X1X 4K mode run better than the XSS with raytracing when one is at 30 and the other is higher than 30 but unstable? If anything it means Capcom needs to lock the framerate of the raytracing mode on the XSS to 30, something I mentioned earlier. That design choice is not the fault of the XSS. It's clear there would be some headroom if the framerate were locked lower.
Checkerboard 1440p is 1280 x 1440; 4K is 1920 x 2160.
One X 4K CB mode averages 51fps (VRR territory), Series S 1440p CB RT mode averages 37fps.
One X 1080p native mode averages 60fps, Series S 1440p CB non-RT mode 59(.76)fps.
 
Has Dictator said Tier 2 is not possible via software? Because if so, he is completely wrong. A developer pointed out a while ago (posting here on NeoGAF, if I'm not mistaken, or via Twitter) that software VRS can be even better than hardware VRS. The Tier 1 or Tier 2 nomenclature changes nothing.

Go fuck yourself Riky, you and your stupid fanboy gif

I mean it's not really a big loss or anything.

VRS solutions can be performed through software as well; one could argue not as effectively as the hardware solution, but they're still pretty great. Call of Duty has for years now been using a software-based VRS solution which actually beats the fixed-function hardware (Tier 1, I'm guessing).

I'm with Matt Hargett on this one though: the ultimate performance gains will be achieved through Primitive and Mesh Shaders (which are also great for VRS, btw).

It was the COD dev that implemented VRS in software, and the key takeaway was that it can end up being superior due to the increased flexibility in selecting the granularity of the effect.
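To make the flexibility argument concrete, here's a toy sketch (illustrative Python, not the actual COD implementation; the tile size and contrast thresholds are invented): in software you choose both the granularity and the metric that decides where the shading rate can safely drop.

```python
# Toy sketch of the idea behind software VRS (illustration only, not the
# actual COD implementation; tile size and thresholds here are invented).
import numpy as np

TILE = 8  # hypothetical tile size; hardware Tier 2 tiles are fixed by the GPU

def shading_rate_map(luma: np.ndarray) -> np.ndarray:
    """Per-tile shading rate: 1 = full rate, 2 = 2x2 coarse, 4 = 4x4 coarse."""
    h, w = luma.shape
    rates = np.ones((h // TILE, w // TILE), dtype=np.uint8)
    for ty in range(h // TILE):
        for tx in range(w // TILE):
            tile = luma[ty * TILE:(ty + 1) * TILE, tx * TILE:(tx + 1) * TILE]
            contrast = tile.std()  # low contrast -> coarser shading is invisible
            if contrast < 2.0:
                rates[ty, tx] = 4
            elif contrast < 8.0:
                rates[ty, tx] = 2
    return rates

# Fake 1080p luma plane: a smooth gradient (coarse-shadeable) plus a noisy
# band (detail that must stay at full rate).
luma = np.tile(np.linspace(0, 255, 1920), (1080, 1)).astype(np.float32)
luma[400:600] += np.random.default_rng(0).normal(0, 30, (200, 1920))

rates = shading_rate_map(luma)
saved = 1.0 - (1.0 / rates.astype(np.float32) ** 2).mean()
print(f"fragment-shader invocations saved: {saved:.0%}")
```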
 

OverHeat

« generous god »
There was no downloadable update; you just restarted your console and the game option appeared. Pretty amazing stuff.
Played the Series S and One X versions of Village; the Series S loads about ten times faster and has superior image quality. What tiny drops it has are taken care of by VRR, so I wouldn't play on One X if I had the choice, although it's still a fine way to play.
You only got a Series S….holy shit
 

M1chl

Currently Gif and Meme Champion
Interesting to see what the Coalition does with UE5 on the Xbox consoles. Will the lack of a powerful I/O compared to the PS5 result in concessions?
It's not that UE5 needs the PS5; it's more that UE5 needs the PS5 for the scene shown in the demo. I am sure that UE5 is going to be used as a mobile game engine as well as for big games and movies.

The issue with the Coalition is that they lack heavily on the creative side of things, while their technical department is actually really good.
 

IntentionalPun

Ask me about my wife's perfect butthole
We really should wait for actual games and some detailed info before declaring anything about UE5, or about what is actually feasible detail-wise with better I/O.

I'm perfectly happy with the bestest load times myself, and skeptical much else is gonna matter.
 

Stuart360

Member
It's not that UE5 needs the PS5; it's more that UE5 needs the PS5 for the scene shown in the demo. I am sure that UE5 is going to be used as a mobile game engine as well as for big games and movies.

The issue with the Coalition is that they lack heavily on the creative side of things, while their technical department is actually really good.
Harsh. I think they tried to mix things up with Gears 4, and especially 5, but I feel there is only so much you can do with Gears' big-dude third-person cover gameplay. I think Gears needs to go full open world to stand out.
 

M1chl

Currently Gif and Meme Champion
Harsh. I think they tried to mix things up with Gears 4, and especially 5, but I feel there is only so much you can do with Gears' big-dude third-person cover gameplay. I think Gears needs to go full open world to stand out.
Since I recently replayed the OG trilogy, I feel like being even harsher, especially about the 4th game, which was pretty bad coming after the 3rd. The mighty Bleszinski is probably not so easy to replace...
 
Interesting to see what the Coalition does with UE5 on the Xbox consoles. Will the lack of a powerful I/O compared to the PS5 result in concessions?
[Spock GIF]


There will be no concessions compared to PS5. Series X has its own powerful I/O solution.

 
Checkerboard 1440p is 1280 x 1440; 4K is 1920 x 2160.
One X 4K CB mode averages 51fps (VRR territory), Series S 1440p CB RT mode averages 37fps.
One X 1080p native mode averages 60fps, Series S 1440p CB non-RT mode 59(.76)fps.
Where did you get your info on the 1440p checkerboarding? I am not surprised that 1440p checkerboarding is less taxing than 1440p native. I AM surprised that 1440p checkerboarding is less taxing than 1080p. What happens when a 1440p checkerboarded image is displayed on a 1080p set? Artifacts might be seen when 4K checkerboard is shown on a 4K set, but I'd imagine a 4K checkerboard image on a 1080p set would look pretty good. Perhaps this is what MS was going for when they were talking 1440p gaming.
 

Mr Moose

Member
Where did you get your info on the 1440p checkerboarding? I am not surprised that 1440p checkerboarding is less taxing than 1440p native. I AM surprised that 1440p checkerboarding is less taxing than 1080p. What happens when a 1440p checkerboarded image is displayed on a 1080p set? Artifacts might be seen when 4K checkerboard is shown on a 4K set, but I'd imagine a 4K checkerboard image on a 1080p set would look pretty good. Perhaps this is what MS was going for when they were talking 1440p gaming.
That's what checkerboarding is: 1280 x 1440 for 1440p CB and so on. This has been known for years.


 

MistBreeze

Member
I have been playing RE Village

but god damn

PS5's first-year games, first and third party, are leagues better than the PS4's first year. It is not even close; there are many quality games.

I mean we got: Miles Morales, Demon's Souls remake, Astro's Playroom, Sackboy, Hitman 3, Returnal, and RE Village so far

Ratchet next month

varied genres too

all are quality, great games

these consoles have SSD tech, great GPUs and CPUs, and are backward compatible with boosted old games

actually this year I'm waiting for Forbidden West, Halo Infinite, Battlefield 6, and God of War Ragnarok if it comes this year

hope these games live up to expectations

I'm planning to buy a Series X if Halo Infinite turns out to be great, and I have a bunch of old Xbox 360 discs and digital games I want to revisit in BC mode
 
There will be no concessions compared to PS5. Series X has its own powerful I/O solution.
I think you might be wrong there. Yes, Series X has its own powerful solution, but even without looking at latency, you're looking at a max of 4.8GB/s of streaming versus a max of 17GB/s. Then there is the special I/O hardware complex of the PS5, which is unique and built specifically for ultra-low latency among other things, while the Xbox solution uses much more standard parts.
 

Godfavor

Member
I think you might be wrong there. Yes, Series X has its own powerful solution, but even without looking at latency, you're looking at a max of 4.8GB/s of streaming versus a max of 17GB/s. Then there is the special I/O hardware complex of the PS5, which is unique and built specifically for ultra-low latency among other things, while the Xbox solution uses much more standard parts.
4.8GB/s is the average according to MS. 8-9GB/s is the average according to Sony.
 

skit_data

Member
4.8GB/s is the average according to MS. 8-9GB/s is the average according to Sony.
The 8-9GB/s figure was provided before Oodle Texture was announced. When games take advantage of both Kraken and Oodle Texture compression it can reach 17.38GB/s. The 8-9GB/s is probably still in specifications all over the place, though.
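For what it's worth, the arithmetic behind that figure is just raw speed times average compression ratio; the ~3.16:1 value is RAD's published average for Kraken plus Oodle Texture on their test set, so treat this as a back-of-envelope sketch rather than a guaranteed in-game rate:

```python
# Back-of-envelope for the 17.38 GB/s figure: effective speed is just raw
# SSD speed times the average compression ratio. The ~3.16:1 value is RAD's
# published average for Kraken + Oodle Texture; real games will vary.
RAW_PS5 = 5.5              # GB/s raw
KRAKEN_ONLY = 8.0 / 5.5    # ~1.45:1, matching the original "8-9 GB/s typical" claim
KRAKEN_OODLE = 3.16        # average ratio with Oodle Texture applied first

for label, ratio in (("Kraken alone", KRAKEN_ONLY),
                     ("Kraken + Oodle Texture", KRAKEN_OODLE)):
    print(f"{label}: {RAW_PS5 * ratio:.2f} GB/s effective")
# Kraken alone: 8.00 GB/s effective
# Kraken + Oodle Texture: 17.38 GB/s effective
```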

 
4.8GB/s is the average according to MS. 8-9GB/s is the average according to Sony.
Those are the original values given by Sony; it's much higher now because of the Oodle compression etc., which wasn't available at that time. I'm pretty sure Series X maxes out at 4.8, and that might be an average at the same time. I could be wrong there, but as I remember it's 2.4 raw and a max of 4.8 using compression etc.
 

dcmk7

Banned
That's what checkerboarding is: 1280 x 1440 for 1440p CB and so on. This has been known for years.



This won't get responded to..

He will pop up again in a new thread and basically start again, ignoring any information that doesn't fit his narrative, lying and strawmanning his way through arguments with a spot of SonyToo thrown into the mix. All dull and tedious.
 

IntentionalPun

Ask me about my wife's perfect butthole
Those are the original values given by Sony; it's much higher now because of the Oodle compression etc., which wasn't available at that time. I'm pretty sure Series X maxes out at 4.8, and that might be an average at the same time. I could be wrong there, but as I remember it's 2.4 raw and a max of 4.8 using compression etc.

Nah, XSX max theoretical is a bit higher than that, although using best-case scenarios is flawed.

PS5 has 4 multipliers over Xbox in this area either way: much faster raw speed, significantly better compression ratio w/ Kraken, faster decompression, and lower CPU usage.

It's an I/O beast.
 

IntentionalPun

Ask me about my wife's perfect butthole
Where did you get your info on the 1440p checkerboarding? I am not surprised that 1440p checkerboarding is less taxing than 1440p native. I AM surprised that 1440p checkerboarding is less taxing than 1080p. What happens when a 1440p checkerboarded image is displayed on a 1080p set? Artifacts might be seen when 4K checkerboard is shown on a 4K set, but I'd imagine a 4K checkerboard image on a 1080p set would look pretty good. Perhaps this is what MS was going for when they were talking 1440p gaming.
1920 * 1080 = 2,073,600 pixels
1280 * 1440 (1440p CB) = 1,843,200 pixels

It's about 10% less taxing.

edit: I think this is wrong, but I'm sticking with it.

I is smart.
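Reproducing that arithmetic in plain Python; note that raw pixel counts ignore the checkerboard resolve pass, which is why this conclusion gets walked back just below:

```python
# Reproducing the pixel math above. This counts only rendered pixels; the
# checkerboard resolve/reconstruction pass has its own cost, which is why
# the thread later settles on 1440p CB being MORE expensive than 1080p
# native despite the smaller raw pixel count.
native_1080p = 1920 * 1080   # 2,073,600 pixels
cb_1440p     = 1280 * 1440   # 1,843,200 pixels (half of 2560x1440)
cb_4k        = 1920 * 2160   # half of 3840x2160

print(f"1440p CB renders {1 - cb_1440p / native_1080p:.1%} fewer raw pixels than 1080p")
# -> about 11% fewer, before adding the reconstruction cost back
```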
 

IntentionalPun

Ask me about my wife's perfect butthole
That is not actually what happens... CB uses GPU processing to generate the final image.

1440p CB will be less taxing than 1440p by around 20-30%.
But it is more taxing than 1080p.
It renders half the pixels of 1440p.. I don't think the interpolation (or rather.. the lack of it) takes anywhere near enough resources to make it only a 20-30% improvement.

From a pure rendering standpoint it's basically half the pixels.. the other aspects of certain CB implementations do more than just calculate the full frame from the half frames anyway, adding AA and whatnot, so you can't really compare directly to a plain native render.
 

ethomaz

Banned
It renders half the pixels of 1440p.. I don't think the interpolation takes anywhere near enough resources to make it only a 20-30% improvement.
It does.

You can compare 4K native vs 4K CB and the CB one is just around 20-30% lighter on the GPU.

Frostbite devs gave a presentation at GDC showing Battlefield running on PS4 Pro at 1800p and 1800p CB, and the difference was 24% in performance.

Edit - You can see here if you are interested:

 
That is not actually what happens... CB uses GPU processing to generate the final image.

1440p CB will be less taxing than 1440p by around 20-30%.
But it is more taxing than 1080p.
So after all is said, is 1440p CB cheaper or more expensive to render than 1080p? I am hearing two different answers. If it is cheaper, there is no reason to ever render at 1080p if you can render at 1440p CB and scale the image down to make it look better on native 1080p screens, right? If it's more expensive, then the argument that 1440p is cheaper is wrong, since the XSS is actually working harder to run RE at 1440p CB than the X1X is running the game at 1080p.
 
I think you might be wrong there. Yes, Series X has its own powerful solution, but even without looking at latency, you're looking at a max of 4.8GB/s of streaming versus a max of 17GB/s. Then there is the special I/O hardware complex of the PS5, which is unique and built specifically for ultra-low latency among other things, while the Xbox solution uses much more standard parts.
With Sampler Feedback Streaming + the dedicated decompression hardware, you're looking at 12GB/s of streaming for textures on Xbox Series X. Sampler Feedback Streaming directly impacts effective SSD streaming speed due to its average 2.5x memory efficiency benefit; Microsoft points out that the memory efficiency gains benefit their SSD streaming speed as well.

Without dedicated decompression, Sampler Feedback Streaming alone would give 6GB/s worth of streaming.
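The arithmetic behind those two figures, made explicit (illustrative Python; the 2:1 compression and 2.5x SFS factors are the claims under discussion, and whether they can be stacked as flat multipliers is exactly what gets disputed below):

```python
# The arithmetic behind the 6 and 12 GB/s claims, made explicit. Whether the
# "2.5x on average" SFS figure can be stacked as a flat multiplier on top of
# compression is exactly what gets disputed below.
RAW_XSX  = 2.4   # GB/s raw SSD
COMPRESS = 2.0   # ~2:1 via the hardware decompression block (BCPack/zlib)
SFS      = 2.5   # MS's claimed average memory/IO efficiency multiplier

print(f"SFS only:               {RAW_XSX * SFS:.1f} GB/s effective")   # 6.0
print(f"SFS + hw decompression: {RAW_XSX * COMPRESS * SFS:.1f} GB/s")  # 12.0
```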
 
With Sampler Feedback Streaming + the dedicated decompression hardware, you're looking at 12GB/s of streaming for textures on Xbox Series X. Sampler Feedback Streaming directly impacts effective SSD streaming speed due to its average 2.5x memory efficiency benefit; Microsoft points out that the memory efficiency gains benefit their SSD streaming speed as well.

Without dedicated decompression, Sampler Feedback Streaming alone would give 6GB/s worth of streaming.
I keep wondering how people keep coming up with these fantasy numbers; the absolute theoretical max is 6GB/s, and that encompasses all those technologies. Where did MS communicate those speeds? I think they communicated a max of 4.8, or maybe 6 somewhere. I can imagine it's hard to find the true speed, though, because MS likes to hide everything that doesn't compare well, like sales numbers and the profitability of Game Pass. Also, using all these software solutions takes away from CPU power, but that is another story altogether.
 

ethomaz

Banned
So after all is said, is 1440p CB cheaper or more expensive to render than 1080p? I am hearing two different answers. If it is cheaper, there is no reason to ever render at 1080p if you can render at 1440p CB and scale the image down to make it look better on native 1080p screens, right? If it's more expensive, then the argument that 1440p is cheaper is wrong, since the XSS is actually working harder to run RE at 1440p CB than the X1X is running the game at 1080p.
1440p CB is definitely more expensive than 1080p native in performance terms.
It is cheaper than native 1440p by around 20-30%.

The doc I posted shows that pretty well.
 

IntentionalPun

Ask me about my wife's perfect butthole
It does.

You can compare 4K native vs 4K CB and the CB one is just around 20-30% lighter on the GPU.

Frostbite devs gave a presentation at GDC showing Battlefield running on PS4 Pro at 1800p and 1800p CB, and the difference was 24% in performance.

Edit - You can see here if you are interested:

Thanks.. wonder why their frame time dropped like 30% going from 2160p native to 1800p native though? It's like a ~20% rez drop with a ~30% frame time drop (head math, forgive me, but it's something like that)
 

ethomaz

Banned
I keep wondering how people keep coming up with these fantasy numbers; the absolute theoretical max is 6GB/s, and that encompasses all those technologies. Where did MS communicate those speeds? I think they communicated a max of 4.8, or maybe 6 somewhere. I can imagine it's hard to find the true speed, though, because MS likes to hide everything that doesn't compare well, like sales numbers and the profitability of Game Pass. Also, using all these software solutions takes away from CPU power, but that is another story altogether.
2.4GB/s is the theoretical max SSD speed in the Series X... everything else is based on how much you can compress and decompress.

And that has a lot of options, because we have two types of algorithms: lossless and lossy.

Lossless is like Zip, and the final result is identical to the original file... in that mode, if you can reach 50% compression (which is very high) your 2.4GB/s becomes 4.8GB/s.

With lossy, things are a bit more complicated and depend on how much quality you are willing to lose... with a heavy loss of quality it is possible to reach 70-80% compression, and you can reach 8-12GB/s from the same 2.4GB/s bandwidth.

Remember that for this to work, the compressed file needs to be decompressed in less than half of your render time before it can be used, so it doesn't affect the render time... that means very heavy lossless or lossy algorithms (bigger compression ratios) won't be used unless you have a really strong decompression unit.

You can run some experiments... take a 4K RAW texture and try converting it to PNG (lossless) and JPEG (lossy) and you will get an idea of how much time each level of compression takes (PNG has levels 1-9 and JPEG 1-100)... and see the actual size of the final compressed texture.

Of course consoles use more specialized algorithms, but it will give you an idea of the results and of what you have to give up in terms of quality.
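A minimal version of that experiment using Pillow (assumes a local test image, here called texture_4k.png, which is just a placeholder name; PNG/JPEG are stand-ins for intuition only, since consoles use GPU block formats plus Kraken/BCPack/zlib):

```python
# A minimal version of the suggested experiment, using Pillow. PNG is
# lossless (compress_level 0-9), JPEG is lossy (quality 1-95).
import io
import time
from PIL import Image

RAW_GBPS = 2.4  # Series X raw SSD speed, for the effective-bandwidth column
img = Image.open("texture_4k.png").convert("RGB")  # any local 4K test image
raw_bytes = img.width * img.height * 3             # uncompressed RGB size

def trial(fmt, **kwargs):
    buf = io.BytesIO()
    t0 = time.perf_counter()
    img.save(buf, fmt, **kwargs)
    dt = time.perf_counter() - t0
    ratio = raw_bytes / buf.tell()
    print(f"{fmt} {kwargs}: ratio {ratio:.2f}:1, {dt * 1000:.0f} ms, "
          f"implied effective {RAW_GBPS * ratio:.1f} GB/s")

for level in (1, 6, 9):          # lossless: ratio plateaus, time climbs
    trial("PNG", compress_level=level)
for quality in (95, 75, 40):     # lossy: ratio keeps climbing as quality drops
    trial("JPEG", quality=quality)
```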
 
Has Dictator said Tier 2 is not possible via software? Because if so, he is completely wrong. A developer pointed out a while ago (posting here on NeoGAF, if I'm not mistaken, or via Twitter) that software VRS can be even better than hardware VRS. The Tier 1 or Tier 2 nomenclature changes nothing.

Go fuck yourself Riky, you and your stupid fanboy gif
The COD Modern Warfare engine should be one of the engines least compatible with software VRS, and the devs found that software VRS was better than hardware VRS in most respects: more flexible, better performance, better results. This is a rough summary of the paper:

 

IntentionalPun

Ask me about my wife's perfect butthole
I keep wondering how people keep coming up with these fantasy numbers; the absolute theoretical max is 6GB/s, and that encompasses all those technologies. Where did MS communicate those speeds? I think they communicated a max of 4.8, or maybe 6 somewhere. I can imagine it's hard to find the true speed, though, because MS likes to hide everything that doesn't compare well, like sales numbers and the profitability of Game Pass. Also, using all these software solutions takes away from CPU power, but that is another story altogether.

They are using SFS as a multiplier; it isn't as direct as compression.. but it is effectively doing the same thing, while also using less memory (compared against traditional texturing techniques).

It's still not really catching up to PS5, and we don't know how effective it is in real-world scenarios compared to compression, which is far more predictable.

MS claims a 2x or 3x multiplier (use half to a third of the texture data to render the same target imagery). However, it's not the only game in town as far as avoiding the loading of texture data.. it really shouldn't be used as a multiplier the way it is, or at least only with a giant asterisk.
 

Boglin

Member
I'm curious after seeing people post the Xbox's theoretical/practical bandwidth when using SFS as a multiplier. Has anyone done the math for PS5 and PRT+ to see what its equivalent practical speed would be? I couldn't find it doing a quick Google search.
 
1440p CB is definitely more expensive than 1080p native in performance terms.
It is cheaper than native 1440p by around 20-30%.

The doc I posted shows that pretty well.
So 1440p CB is MORE taxing than 1080p. That means the XSS is working HARDER to render RE at 60fps with no RT than the X1X is at 1080p. So my original point was right all along, and the XSS is clearly more performant than the most powerful console of last generation, like it should be.

I wonder why people have such a dislike for an affordable lower-resolution console. It is clearly running the way it should based on developer effort, and it does things last generation's consoles could not, for a price lower than other consoles this generation. It is so weird to see people ignore devs saying the system isn't an issue, make up weaknesses the console doesn't have, or compare it to the Switch. It is a sight to behold. I get it if you want a 4K console and this isn't that, but making things up about it strikes me as odd.

I'm looking forward to seeing how the console performs when SFS and VA are actually used. Then perhaps we can really see what it can do. For now I'm certain more disingenuous potshots are incoming.
 
I keep wondering how people keep coming up with these fantasy numbers; the absolute theoretical max is 6GB/s, and that encompasses all those technologies. Where did MS communicate those speeds? I think they communicated a max of 4.8, or maybe 6 somewhere. I can imagine it's hard to find the true speed, though, because MS likes to hide everything that doesn't compare well, like sales numbers and the profitability of Game Pass. Also, using all these software solutions takes away from CPU power, but that is another story altogether.

It's barely taking away any CPU power at all. It uses just one tenth of a single CPU core to handle all of this. Microsoft has confirmed this multiple times now. A PC without DirectStorage would require 13 Zen 2 cores to match what Series X does. It's far from a software solution; there's a good bit of hardware involved. That's how it comes down to one tenth of a single core.

[attached image]


What also puzzles me about people calling the Series X solution 'software' is that it has a fully dedicated hardware decompression unit built in, supporting zlib as well as Microsoft's custom texture compression, BCPack. The hardware decompression unit can deliver over 6GB/s.

[attached image]


Then there's the custom-designed SSD itself, where Microsoft confirms a guaranteed minimum of 2GB/s at all times, no matter what. It'll be 2.4GB/s when the hardware or OS isn't doing any other type of work, such as maintenance. The SSD was designed around sustained performance as opposed to peak performance.

So taking Microsoft's guaranteed minimum, the numbers with Sampler Feedback Streaming change to 5GB/s without the hardware decompression and 10GB/s with it. So we are no longer talking about "theoreticals". These are Microsoft's guaranteed minimum SSD streaming figures, and the reason Microsoft can guarantee those minimums is the custom work done on their SSD.
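Plugging the guaranteed minimum into the same multiplier math used earlier in the thread (again illustrative, taking the quoted factors at face value):

```python
# Plugging the guaranteed-minimum 2 GB/s into the same multiplier math used
# earlier (taking the quoted 2.5x SFS and ~2:1 compression figures at face
# value).
GUARANTEED = 2.0          # GB/s minimum per the quoted MS statements
SFS, COMPRESS = 2.5, 2.0

print(f"SFS only:               {GUARANTEED * SFS:.0f} GB/s")             # 5
print(f"SFS + hw decompression: {GUARANTEED * SFS * COMPRESS:.0f} GB/s")  # 10
```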

And before people go screaming "omg an SSD downgrade"

Here is Microsoft explaining that the spec has not changed, with more detail on what they mean.

[attached images]


They also touched on it in a blog post last year.

[attached image]
 
The more resolution and objects on the 3D sphere that are composed when rendering 3D audio, the more detailed and immersive the audio will be, but the amount of computation also increases, so there is a limit to what the CPU, GPU, or a general-purpose DSP can handle. While allocating computing resources to the game, we want to allocate as many resources as possible to audio. In short, we came to the conclusion that in order to get rid of the resource limitation, it is desirable to install dedicated hardware.
We are also working on the development of virtual speakers for TVs. We want to deliver an experience that is as close to headphones as possible.

However, the problem is the cancellation of crosstalk (the mixing of sounds between channels). This part is especially important for TV optimization. TV speakers are not the ideal environment for enjoying 3D audio, but the key is how close to the ideal experience you can get in the end.

We will also promote support for sound bars and other devices, although their nature is different.

The important thing is that we want to deliver 3D audio to everyone. If we only offer ...... to those who have amazing audio sets, we will not be able to expand our touch points. We would like to gradually expand this to TVs as well, including by type.
https://av.watch.impress.co.jp/docs/series/rt/1323407.html
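As a textbook-style toy of what "cancelling crosstalk" means (not Sony's implementation; real systems use frequency-dependent filters and track listener position, and the leak value here is invented):

```python
# Textbook toy of crosstalk cancellation: each ear hears its own speaker
# plus a leaked copy of the other; pre-filtering the speaker feeds with the
# inverse of that mixing undoes the leak. Not Sony's actual implementation.
import numpy as np

leak = 0.4                        # invented fraction of the opposite channel reaching each ear
C = np.array([[1.0, leak],
              [leak, 1.0]])       # ears_heard = C @ speakers_played
binaural = np.array([1.0, 0.0])   # target: signal in the left ear only

speakers = np.linalg.inv(C) @ binaural  # pre-compensated speaker feeds
print("at the ears:", C @ speakers)     # -> [1. 0.]: crosstalk cancelled
```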
 

I knew I'd seen your avatar somewhere before.

 

Imtjnotu

Member
With Sampler Feedback Streaming + the dedicated decompression hardware, you're looking at 12GB/s of streaming for textures on Xbox Series X. Sampler Feedback Streaming directly impacts effective SSD streaming speed due to its average 2.5x memory efficiency benefit; Microsoft points out that the memory efficiency gains benefit their SSD streaming speed as well.

Without dedicated decompression, Sampler Feedback Streaming alone would give 6GB/s worth of streaming.
It does not work that way and you know it
 

IntentionalPun

Ask me about my wife's perfect butthole
If one machine can render the same game with half the textures loaded, it's effectively the same as being able to compress twice as much on your "bandwidth to render a target image", while also halving the RAM it ends up using (as there is nothing to decompress).

It's really not hard to understand.. the real question is how real-world those numbers are, how hard it is to use, whether it is useful for all textures or only some, etc. Compression, by contrast, is straightforward and works across all textures the same way.

It's also not the only potential method of reducing the loading of textures, so there's that. It's not entirely clear what MS is comparing against; they do mention "streaming", so I think it's regular PRT.
 

elliot5

Member
Has Dictator said Tier 2 is not possible via software? Because if so, he is completely wrong. A developer pointed out a while ago (posting here on NeoGAF, if I'm not mistaken, or via Twitter) that software VRS can be even better than hardware VRS. The Tier 1 or Tier 2 nomenclature changes nothing.

Go fuck yourself Riky, you and your stupid fanboy gif
He did not. He has said Tier 2 VRS is a feature supported by the GPU hardware (i.e., through DX12U or Vulkan on RDNA2 cards and the relevant Nvidia architectures). That is a fact.

Xbox supports this RDNA2 hardware feature for VRS Tier 2; PS5 does not. That is a fact.

Infinity Ward creating a software-based VRS solution is fine. In fact, it's great.

This isn't to say that Xbox >>>> PS5 now in performance, or that PlayStation can't have a similar implementation through other means, like software solutions.

He has not said anything wrong about VRS and its supported platforms as far as I'm aware. I just reviewed his video on VRS and read his messages. He even acknowledged Modern Warfare's VRS before those comments/video. People have a hate boner for DF, I guess, and are misconstruing things in their console wars.
 
He did not. He has said Tier 2 VRS is a feature supported by the GPU hardware (i.e., through DX12U or Vulkan on RDNA2 cards and the relevant Nvidia architectures). That is a fact.

Xbox supports this RDNA2 hardware feature for VRS Tier 2; PS5 does not. That is a fact.

Infinity Ward creating a software-based VRS solution is fine. In fact, it's great.

This isn't to say that Xbox >>>> PS5 now in performance, or that PlayStation can't have a similar implementation through other means, like software solutions.

He has not said anything wrong about VRS and its supported platforms as far as I'm aware. I just reviewed his video on VRS and read his messages. He even acknowledged Modern Warfare's VRS before those comments/video. People have a hate boner for DF, I guess, and are misconstruing things in their console wars.
So what component of the GPU shared by RDNA 2 PC cards, Nvidia cards, and the XSX/S APUs is responsible for hardware VRS Tier 2?
 

Boglin

Member
If one machine can render the same game with half the textures loaded, it's effectively the same as being able to compress twice as much on your "bandwidth to render a target image", while also halving the RAM it ends up using (as there is nothing to decompress).

It's really not hard to understand.. the real question is how real-world those numbers are, how hard it is to use, whether it is useful for all textures or only some, etc. Compression, by contrast, is straightforward and works across all textures the same way.

It's also not the only potential method of reducing the loading of textures, so there's that.
I don't think people have an issue understanding that it streams in only the portions it needs, leaving out data it doesn't need and increasing effective memory. That's the key benefit local hardware streaming has always had.

The issue I have is that numbers are never given within a well defined context. This is what Jason Ronald's article on Xbox.com states about SFS: "This innovation results in approximately 2.5x the effective I/O throughput and memory usage above and beyond the raw hardware capabilities on average. SFS provides an effective multiplier on available system memory and I/O bandwidth, resulting in significantly more memory and I/O throughput available to make your game richer and more immersive."

To me, the bolded part sounds like it's being compared strictly against the peak hardware bandwidths and not against other similar streaming methods such as PRT or PRT+. Hopefully, if I'm wrong, someone will correct me and show me a clearer source.

In the Xbox vs PlayStation debates, it often sounds like people are comparing a machine that can render a scene using 2.5x less texture data (SFS) vs one that requires all the textures (no SFS), but I don't think that is an accurate representation of these consoles.

Shouldn't we be comparing a machine with SFS vs one with PRT+? I want to know whether it is more akin to a 2.5x multiplier vs a 2.0x multiplier, or whatever the real numbers are.
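A toy way to see why the baseline matters (illustrative Python; the tile counts are exact for 64KB tiles at ~1 byte/pixel BC-compressed, but the residency fractions are invented):

```python
# Toy illustration of why the baseline matters when quoting an SFS
# "effective multiplier". Residency fractions below are invented.
TILE_BYTES = 64 * 1024

def mip_tiles(w, h, bpp=1):
    return max(1, (w * h * bpp) // TILE_BYTES)

mips = [(4096 >> i, 4096 >> i) for i in range(7)]  # 4096 down to 64

full_chain = sum(mip_tiles(w, h) for w, h in mips)      # naive: whole chain, 343 tiles
mip_only = sum(mip_tiles(*mips[m]) for m in (2, 3, 4))  # PRT-ish: right mips, all tiles, 21
touched = {2: 0.30, 3: 0.50, 4: 1.00}                   # feedback: fraction of tiles sampled
sfs = sum(max(1, int(mip_tiles(*mips[m]) * f)) for m, f in touched.items())  # 7 tiles

print(f"vs naive full load:     {full_chain / sfs:.0f}x")  # looks enormous
print(f"vs mip-level streaming: {mip_only / sfs:.1f}x")    # the fairer comparison
```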
 

Bo_Hazem

Banned
Has Dictator said Tier 2 is not possible via software? Because if so, he is completely wrong. A developer pointed out a while ago (posting here on NeoGAF, if I'm not mistaken, or via Twitter) that software VRS can be even better than hardware VRS. The Tier 1 or Tier 2 nomenclature changes nothing.

Go fuck yourself Riky, you and your stupid fanboy gif

Yup, top-tier developers of Call of Duty:


[slides from the Call of Duty software VRS presentation]
 

elliot5

Member
So what component of the GPU shared by RDNA 2 PC cards, Nvidia cards, and the XSX/S APUs is responsible for hardware VRS Tier 2?
I don't fuckin know man, I'm not a graphics engineer.
"Tier 2 VRS allowed Gears 5/Tactics to see an up to 14% boost in GPU perf with no perceptible impact to visual quality. It is available on all hardware supporting DirectX 12 Ultimate, including Xbox Series X|S, AMD Radeon™ RX 6000 Series graphics cards, and NVIDIA GeForce RTX 20 Series and 30 Series GPUs."

Linux drivers also require RDNA2 AMD 6000 series cards to use variable rate shading optimizations.

Whatever it is that Nvidia and AMD introduced at the hardware level in Turing, Ampere, and RDNA2 that prevents VRS Tier 2 from being used on older graphics cards. Or maybe it's the drivers. Idk. I'm just showing that it is a requirement.
 