
Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.

Lysandros

Member
Did anyone else notice that the PS4 Pro fares quite a bit better against the Xbox One X lately? Maybe it's true that the machine has the NGG fast path from Vega (apparently unused in discrete PC versions) and it's finally being leveraged because of the transition to PS5? There are some discussions about it on the Beyond3D forums.
 
Last edited:

roops67

Member
Can't this also be related to cache bandwidth or pixel fill rate?
This won't have anything to do with cache bandwidth. If it were pixel fill rate, I'd expect it to show up more universally rather than specifically affecting transparencies (I'm guessing). To apply a transparency the GPU has to read back from video memory what it's applying the transparency to, blend it with the other image, and write it back to memory, so double the bandwidth of just drawing opaque pixels.
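Back-of-the-envelope, the read-modify-write cost of blending can be sketched like this (a rough illustration only, assuming a 32-bit RGBA frame buffer and ignoring compression and caches):

```python
BYTES_PER_PIXEL = 4  # assuming an RGBA8 frame buffer

def layer_bandwidth_gb(width, height, fps, blended):
    """Approximate GB/s of frame-buffer traffic for one full-screen layer.
    Opaque: one write per pixel. Alpha-blended: a read plus a write."""
    accesses = 2 if blended else 1
    return width * height * BYTES_PER_PIXEL * fps * accesses / 1e9

opaque = layer_bandwidth_gb(3840, 2160, 60, blended=False)   # ~2.0 GB/s at 4K60
blended = layer_bandwidth_gb(3840, 2160, 60, blended=True)   # ~4.0 GB/s at 4K60
```

So every full-screen transparent layer roughly doubles the frame-buffer traffic of an opaque one, and heavy particle effects stack many such layers per frame.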
 

huraga

Banned
That's not true at all. Games run better on PS5 not because it's faster (total nonsense) but, I suspect, probably because it doesn't have the weird RAM/bandwidth configuration of Series X. In any case, having more CUs gives an advantage too; from what I've heard it doesn't take particularly crazy recoding to push them.
Weird RAM/bandwidth? Please, tell me the benefits of unified memory with only a single bus for everything.

I don't know your level of systems knowledge, but unified memory with a single bus is not always a good option, mostly because the CPU and GPU can't read from that bus at the same time without losing performance.

The Xbox doesn't really have divided memory; it uses two different ways of accessing the same memory, one faster and one slower, because the OS, audio and other functions don't need the same speed as the GPU.

I don't know why many people think that unified memory with only one bus is better, or what they are basing that argument on.

Btw, Series X also has SFS, which means it needs almost three times less memory to do the same work as it would without this technology.
 

huraga

Banned
Somewhere it is rather simple, even though the underlying root cause might be technically complicated, and it would be great to hear about it from a development point of view. I must point out that many people flagged this potential difference between the consoles before we had them in our hands.

If the XSX got all of its 12 TFLOPs etc. out on screen, the frame drops would not happen. Way too often we miss the fine print around those numbers: it is the theoretical max. That means that the PS5, in those scenes, has a computational advantage; in other words, the CUs of the XSX are not being fed with tasks to perform, i.e. the I/O system of the XSX is bottlenecking the GPU.

What is the root cause of that bottleneck? I do not know; some developers must have identified where the challenge is. But so far two things are interesting: firstly, that the PS5 was built bottom-up for highly efficient I/O as a design goal, and Sony seems to have succeeded in achieving that; and secondly, how consistent the pattern has been since launch regarding the XSX. The vast majority of games have these issues on the platform; a pattern has emerged.
Why do you think that there is a bottleneck? Do you really believe that engineers don't have ways to calculate it? How can you judge several years of engineers' work in a forum, without being an expert in this?

Maybe the bottleneck is some of the current engines from the last generation.

I think a lot of people overvalue developers and undervalue systems engineers.

And believe me, a developer is more likely to be wrong than a systems engineer. It is much easier to find faults in an engine or in code than in the design of a system, especially with current technology.
 
Last edited:

huraga

Banned
This won't have anything to do with cache bandwidth. If it were pixel fill rate, I'd expect it to show up more universally rather than specifically affecting transparencies (I'm guessing). To apply a transparency the GPU has to read back from video memory what it's applying the transparency to, blend it with the other image, and write it back to memory, so double the bandwidth of just drawing opaque pixels.

Yes, it could be one reason. PS5 has a higher pixel fill rate (not texture fill rate, which is lower) because it runs at a higher clock. It's a non-sustained fill rate, though; that's important to note, since the clock is variable and here we always talk about the PS5's maximum theoretical performance, not the sustained one.

The engines we currently have are likely to benefit more from a high pixel rate than from high computing power, but the new engines will be more complicated and will need more computing power (RT, MS, AI, etc.).

I don't mean to say that Series X will necessarily perform better, because in the end they will be very similar, but Series X is likely to be a little more prepared for the future.
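For what it's worth, the peak fill-rate numbers being argued about fall out of simple arithmetic. A sketch using the publicly stated clocks, and assuming 64 ROPs on both GPUs at one pixel per ROP per clock (the ROP count is an assumption from public die analyses, not an official spec):

```python
def peak_fillrate_gpix(rops, clock_mhz):
    """Theoretical pixel fill rate in Gpixels/s: one pixel per ROP per clock."""
    return rops * clock_mhz / 1000.0

ps5 = peak_fillrate_gpix(64, 2233)  # ~142.9 Gpix/s at PS5's (variable) peak clock
xsx = peak_fillrate_gpix(64, 1825)  # ~116.8 Gpix/s at XSX's fixed clock
```

With equal ROP counts, the clock difference alone gives the PS5 roughly a 22% peak fill-rate edge, which is the "fill rate monster" point made elsewhere in the thread.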
 
Last edited:

roops67

Member
Yes, it could be one reason. PS5 has a higher pixel fill rate (not texture fill rate, which is lower) because it runs at a higher clock. It's a non-sustained fill rate, though; that's important to note, since the clock is variable and here we always talk about the PS5's maximum theoretical performance, not the sustained one.

The engines we currently have are likely to benefit more from a high pixel rate than from high computing power, but the new engines will be more complicated and will need more computing power (RT, MS, AI, etc.).

I don't mean to say that Series X will necessarily perform better, because in the end they will be very similar, but Series X is likely to be a little more prepared for the future.
To give you some context, this is what my conversation is about...
One thing that has been consistent on the XSX is frame drops when there are more alpha-effect transparencies on screen, no matter the game. This points to a bottleneck in memory bandwidth

Can't this also be related to cache bandwidth or pixel fill rate?

This won't have anything to do with cache bandwidth. If it were pixel fill rate, I'd expect it to show up more universally rather than specifically affecting transparencies (I'm guessing). To apply a transparency the GPU has to read back from video memory what it's applying the transparency to, blend it with the other image, and write it back to memory, so double the bandwidth of just drawing opaque pixels.
I don't think that's in line with what you're stating
 

Boglin

Member
Weird RAM/bandwidth? Please, tell me the benefits of unified memory with only a single bus for everything.

I don't know your level of systems knowledge, but unified memory with a single bus is not always a good option, mostly because the CPU and GPU can't read from that bus at the same time without losing performance.

The Xbox doesn't really have divided memory; it uses two different ways of accessing the same memory, one faster and one slower, because the OS, audio and other functions don't need the same speed as the GPU.

I don't know why many people think that unified memory with only one bus is better, or what they are basing that argument on.

Btw, Series X also has SFS, which means it needs almost three times less memory to do the same work as it would without this technology.
Unified memory allows both the CPU and GPU to access the same data without the need to copy across both memory buses, effectively increasing bandwidth and reducing latency as well as being easier for developers to take advantage of.

Microsoft knows unified memory is better and that's exactly why they also have unified memory. XSX memory configuration is a compromise to reach higher memory bandwidth for GPU functions at a lower cost, which I believe is a good decision. However, the GPU is limited to a smaller pool than it potentially could have or else it will hit a very large bandwidth bottleneck since it can't use the high bandwidth 10GB and low bandwidth 6GB portions concurrently. 10GB might be enough though so we'll see as time goes on.

If you want to find out more about why unified memory is sought after, all you have to do is read old developer interviews and technical discussions from the X360/PS3 era.

SFS is 3x better when compared to having no texture streaming feature at all but how much better is it than plain ol' PRT?
 

Elog

Member
Why do you think that there is a bottleneck? Do you really believe that engineers don't have ways to calculate it? How can you judge several years of engineers' work in a forum, without being an expert in this?
Because the theoretical max computational power is higher for the XSX than the PS5, with a few exceptions. That means that in those scenes where the XSX underperforms, its actual computational power is lower than the PS5's; that is why you see fewer frames being rendered.

The way that happens is that the utilisation rate of the hardware is significantly lower on the XSX than on the PS5, and that is due to I/O (cache utilisation, memory speed/latency/bandwidth etc).

You are arguing that this is software based; that might be true. The consistency in results across games/engines argues against that, though, and points towards an architecture difference. Occam's razor right now says that the cache per CU is lower on the XSX than on the PS5, that on top of that the cache is slower (Hz), and finally that there is, to our knowledge, nothing like cache scrubbers in the XSX, making cache management more efficient on the PS5. If I had to bet today, I would bet that the caches tap out more frequently on the XSX than on the PS5, and that this explains more or less exactly what we see. I admit this is a guess.
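The "theoretical max" being argued about is just ALU arithmetic. A sketch using the public CU counts and clocks (the utilisation figures at the end are purely illustrative, not measured numbers):

```python
def peak_tflops(cus, clock_mhz, alus_per_cu=64, flops_per_clock=2):
    """Theoretical FP32 throughput: CUs x 64 ALUs x 2 FLOPs (FMA) per clock."""
    return cus * alus_per_cu * flops_per_clock * clock_mhz / 1e6

xsx = peak_tflops(52, 1825)  # ~12.15 TFLOPs
ps5 = peak_tflops(36, 2230)  # ~10.28 TFLOPs (at the PS5's variable peak clock)

# The post's point: delivered FLOPs = peak x utilisation, so a lower
# utilisation rate can erase a higher peak (illustrative numbers only).
print(xsx * 0.80 < ps5 * 0.98)  # prints True
```

In other words, if the wider GPU is fed poorly enough, its on-paper advantage disappears, which is exactly the utilisation argument being made above.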
 

huraga

Banned
Unified memory allows both the CPU and GPU to access the same data without the need to copy across both memory buses, effectively increasing bandwidth and reducing latency as well as being easier for developers to take advantage of.

Microsoft knows unified memory is better and that's exactly why they also have unified memory. XSX memory configuration is a compromise to reach higher memory bandwidth for GPU functions at a lower cost, which I believe is a good decision. However, the GPU is limited to a smaller pool than it potentially could have or else it will hit a very large bandwidth bottleneck since it can't use the high bandwidth 10GB and low bandwidth 6GB portions concurrently. 10GB might be enough though so we'll see as time goes on.

If you want to find out more about why unified memory is sought after, all you have to do is read old developer interviews and technical discussions from the X360/PS3 era.

SFS is 3x better when compared to having no texture streaming feature at all but how much better is it than plain ol' PRT?
It has the advantage that they can read from the same bus, but when doing it simultaneously, performance is heavily penalized, assuming the system even allows the CPU and GPU to read at the same time.

I explained precisely before that the SX memory works as unified memory with two access buses, one slower and the other faster. But I also said that unified memory is not the holy grail; it all depends on the goal you are after.

I don't need to read those forums because I read many of them years ago. I have been playing on consoles and PCs for more than 35 years and had my first PC when I was 6 years old.

About SFS...

(attached image)
 

huraga

Banned
Because the theoretical max computational power is higher for the XSX than the PS5, with a few exceptions. That means that in those scenes where the XSX underperforms, its actual computational power is lower than the PS5's; that is why you see fewer frames being rendered.

The way that happens is that the utilisation rate of the hardware is significantly lower on the XSX than on the PS5, and that is due to I/O (cache utilisation, memory speed/latency/bandwidth etc).

You are arguing that this is software based; that might be true. The consistency in results across games/engines argues against that, though, and points towards an architecture difference. Occam's razor right now says that the cache per CU is lower on the XSX than on the PS5, that on top of that the cache is slower (Hz), and finally that there is, to our knowledge, nothing like cache scrubbers in the XSX, making cache management more efficient on the PS5. If I had to bet today, I would bet that the caches tap out more frequently on the XSX than on the PS5, and that this explains more or less exactly what we see. I admit this is a guess.
But what consistency in results are you talking about? From the few games that have come out, where all, absolutely all, use old engines? Hasn't Series X won some battles?
Hasn't it been exaggerated when PS5 has won, when most of the time the difference was insignificant?

The current trend is not that. Check out the latest games that have been released: Hitman 3, The Division update, the Control update, and the Outriders demo.

Hasn't the trend changed somewhat? In general, haven't you seen more resolution on Series X in the latest titles?

Do you think that a 20-40% increase in resolution is easier than keeping 3 more frames on "average"?

In addition, a lot of lying has gone on here. People posting snapshots taken at different moments, where there were 15-frame drops that maybe lasted two tenths of a second or less, hinting at a difference of 15 or 20 fps when on average it was often 1 or 2 frames.
 
Last edited:

Lysandros

Member
This won't have anything to do with cache bandwidth. If it were pixel fill rate, I'd expect it to show up more universally rather than specifically affecting transparencies (I'm guessing). To apply a transparency the GPU has to read back from video memory what it's applying the transparency to, blend it with the other image, and write it back to memory, so double the bandwidth of just drawing opaque pixels.
I see. Which components are responsible for this read-and-write process when applying transparency? Are the caches bypassed?
 
Last edited:

bitbydeath

Member
Because the theoretical max computational power is higher for the XSX than the PS5, with a few exceptions. That means that in those scenes where the XSX underperforms, its actual computational power is lower than the PS5's; that is why you see fewer frames being rendered.

The way that happens is that the utilisation rate of the hardware is significantly lower on the XSX than on the PS5, and that is due to I/O (cache utilisation, memory speed/latency/bandwidth etc).

You are arguing that this is software based; that might be true. The consistency in results across games/engines argues against that, though, and points towards an architecture difference. Occam's razor right now says that the cache per CU is lower on the XSX than on the PS5, that on top of that the cache is slower (Hz), and finally that there is, to our knowledge, nothing like cache scrubbers in the XSX, making cache management more efficient on the PS5. If I had to bet today, I would bet that the caches tap out more frequently on the XSX than on the PS5, and that this explains more or less exactly what we see. I admit this is a guess.
Further to this, there has been an architectural shift that software developers such as Epic and Square are updating their engines toward, which will push the PS5 even further ahead of where it is today, because it is all about taking advantage of SSD/I/O speeds. The XSX will see benefits here too, though.
 

Elog

Member
But what consistency in results are you talking about? From the few games that have come out, where all, absolutely all, use old engines? Hasn't Series X won some battles?
Hasn't it been exaggerated when PS5 has won, when most of the time the difference was insignificant?

The current trend is not that. Check out the latest games that have been released: Hitman 3, The Division update, the Control update, and the Outriders demo.

Hasn't the trend changed somewhat? In general, haven't you seen more resolution on Series X in the latest titles?

Do you think that a 20-40% increase in resolution is easier than keeping 3 more frames on "average"?

In addition, a lot of lying has gone on here. People posting snapshots taken at different moments, where there were 15-frame drops that maybe lasted two tenths of a second or less, hinting at a difference of 15 or 20 fps when on average it was often 1 or 2 frames.
The consistency I'm talking about is that the XSX has had a harder time across the board maintaining a stable 60/120 FPS. Maybe there is an exception, but I believe this is almost 100% true. Sometimes this has come with a slight resolution advantage in some scenes for the XSX (i.e. the highest resolution in the dynamic range is higher on the XSX than on the PS5), but the average resolution is basically the same between the two consoles.

I think the above speaks to two things. The theoretical max computational power is higher on the XSX, so in some scenes where all those TFLOPs come out you see a section of higher resolution, but on average there is none (i.e. on average the hardware utilisation is lower on the XSX than on the PS5, since they perform the same despite a difference in theoretical max). And it speaks to a larger variance, an inconsistency, in the XSX's performance, which points to bottlenecking of the hardware with deeper troughs.

As I said, maybe it is software related and can be fixed through the SDK, but the consistency really speaks to an architectural challenge.
 
Last edited:

Boglin

Member
It has the advantage that they can read from the same bus, but when doing it simultaneously, performance is heavily penalized, assuming the system even allows the CPU and GPU to read at the same time.

I explained precisely before that the SX memory works as unified memory with two access buses, one slower and the other faster. But I also said that unified memory is not the holy grail; it all depends on the goal you are after.

I don't need to read those forums because I read many of them years ago. I have been playing on consoles and PCs for more than 35 years and had my first PC when I was 6 years old.

About SFS...

(attached image)
It cannot access the slower bus for the CPU and the faster bus for the GPU concurrently.

You haven't precisely explained anything. You asked others to explain why unified is better without giving any reason whatsoever that the alternative is better, other than saying to trust the engineers. I trust that the engineers don't have an unlimited budget and have to make compromises.

Great. I have been playing on PCs and consoles longer than you have, then.

That does not answer my question. How does what James Stanard says there explain SFS's benefits over the older but similar PRT? Unless you believe PRT is exactly what he is comparing it to. In that case, how much benefit is SFS over no texture streaming at all? 7-10x as efficient?
 

assurdum

Banned
Weird RAM/bandwidth? Please, tell me the benefits of unified memory with only a single bus for everything.

I don't know your level of systems knowledge, but unified memory with a single bus is not always a good option, mostly because the CPU and GPU can't read from that bus at the same time without losing performance.

The Xbox doesn't really have divided memory; it uses two different ways of accessing the same memory, one faster and one slower, because the OS, audio and other functions don't need the same speed as the GPU.

I don't know why many people think that unified memory with only one bus is better, or what they are basing that argument on.

Btw, Series X also has SFS, which means it needs almost three times less memory to do the same work as it would without this technology.
You do know Series X has a single bus too, right? That's the catch, and the double-edged sword of such a choice. When the CPU uses the bandwidth, it bottlenecks the GPU, because the bus has to run at the same speed for both: you can't use two speeds on the same bus at the same time.
 
Last edited:

jroc74

Phone reception is more important to me than human rights
If you want to find out more about why unified memory is sought after, all you have to do is read old developer interviews and technical discussions from the X360/PS3 era.
Exactly, I remember reading about the PS3 ram setup causing issues.

Why some people think this can't still be an issue now is amazing.

Both console manufacturers made compromises. They just made them in different areas. How is this so controversial?
 
Last edited:

Lysandros

Member
Yes, likely. The video frame buffer won't be sticking around in cache; it's too big. So the GPU will be reading and writing every transparent pixel over the memory bus
Seems to be a very inefficient way of doing things; are we sure there isn't a less taxing solution? In the RGT podcast, Matt Hargett put emphasis on faster caches as a more cost-effective solution to the bandwidth problem than increasing RAM bus width/frequency, in the context of a new Switch. Thus I thought that bandwidth-heavy operations such as transparencies should also benefit from faster caches.
 
Last edited:
It has the advantage that they can read from the same bus, but when doing it simultaneously, performance is heavily penalized, assuming the system even allows the CPU and GPU to read at the same time.

I explained precisely before that the SX memory works as unified memory with two access buses, one slower and the other faster. But I also said that unified memory is not the holy grail; it all depends on the goal you are after.

I don't need to read those forums because I read many of them years ago. I have been playing on consoles and PCs for more than 35 years and had my first PC when I was 6 years old.

About SFS...

(attached image)
Does this SFS killer new feature come before or after the killer power-of-the-cloud feature? Just to set expectations really :)
 

huraga

Banned
It cannot access the slower bus for the CPU and the faster bus for the GPU concurrently.

You haven't precisely explained anything. You asked others to explain why unified is better without giving any reason whatsoever that the alternative is better, other than saying to trust the engineers. I trust that the engineers don't have an unlimited budget and have to make compromises.

Great. I have been playing on PCs and consoles longer than you have, then.

That does not answer my question. How does what James Stanard says there explain SFS's benefits over the older but similar PRT? Unless you believe PRT is exactly what he is comparing it to. In that case, how much benefit is SFS over no texture streaming at all? 7-10x as efficient?
Why can't the CPU access the slow part and the GPU the fast part at the same time?

Do you really think that the engineers of the second biggest company in the world have more limitations than a company that isn't even in the top 10?

Please, explain the differences between PRT and SFS.

About unified memory, I was complaining because I don't know why some people think it is the panacea. Whether it is better depends on your goals.
 
Last edited:

huraga

Banned
PS5 is a fill rate monster; the only other Sony console that was considered a fill rate monster was the PS2.

Fill rate is not everything, the same as TFLOPs are not everything.
Does this SFS killer new feature come before or after the killer power-of-the-cloud feature? Just to set expectations really :)
It comes from the same place as the power of the PS3's Cell, do you remember that? That was before the power of the cloud.
 

Lunatic_Gamer

Gold Member

Destroy All Humans! Update 1.08 Adds Support For 60 FPS On PS5, Improved Visuals On XSX


Black Forest Games has released a new update for Destroy All Humans! that has added a new FPS limit option and implemented some bug fixes.


  • PS4 patch to unlock FPS restrictions (this will help players on PS5 as well)
  • X1X patch for improved details
  • XSX patch for improved resolution and details

 

M1chl

Currently Gif and Meme Champion
Fill rate is not everything, the same as TFLOPs are not everything.

It comes from the same place as the power of the PS3's Cell, do you remember that? That was before the power of the cloud.
SFS sounds like a nice feature, if it actually ends up working (on either console).
 

bitbydeath

Member


“Downporting” should be how developers approach cross-gen title development going forward. We get our shiny next-gen games with all the bells and whistles, and last-gen machines still get essentially the same game, albeit with compromises. Best of both worlds, I would say. 😉

Why can’t the companies themselves come out with this sort of information early on to build hype?
 

roops67

Member
The GPU has 4MB of L2 cache, which should be enough for a few tiles. No?
To be honest, not sure, see below
Seems to be a very inefficient way of doing things; are we sure there isn't a less taxing solution? In the RGT podcast, Matt Hargett put emphasis on faster caches as a more cost-effective solution to the bandwidth problem than increasing RAM bus width/frequency, in the context of a new Switch. Thus I thought that bandwidth-heavy operations such as transparencies should also benefit from faster caches.
You're right, it does sound very inefficient. I don't know the technicalities: could the GPU blend the overlapping transparent parts of the screen at the same time instead of in a separate pass, so it doesn't need to go and look up over the memory bus what it wrote there earlier in the frame to alpha blend with? Or can it do the transparency pass while that part of video memory is still resident in cache? Maybe this is what they've worked out on the PS5, so transparencies aren't such an issue for it. I don't really know, and my explanations are sounding like gibberish now; anybody else want to chime in on how alpha blending is done? But let me refer you to a timestamped DF video by everyone's favourite PCMR knobhead Alex Battaglia...

Here, notice that he had to search hard to find frame drops on the PS5, and strangely left the XSX out of this direct comparison :messenger_winking_tongue: ! He uses alpha-blended particle effects as a GPU bandwidth stress test, inferring they get added in a later pass within the same frame. So as I interpret it, to apply the blending the GPU has to look up what it's blending with, which means reading the pixel from the frame buffer over the memory bus, blending the new overlapping pixel with it, and writing it back to the frame buffer.

I'd ignore his comparisons between PC and consoles though; they're fundamentally flawed and full of shite.
 
Last edited:
To be honest, not sure, see below
I was talking about this:


RDNA's biggest differentiator vs GCN is tiled rendering, which reduces memory bandwidth requirements a lot.

It's probably the reason you see less power consumption in XSX vs XB1X running BC games...
 

Godfavor

Member
It cannot access the slower bus for the CPU and the faster bus for the GPU concurrently.

You haven't precisely explained anything. You asked others to explain why unified is better without giving any reason whatsoever that the alternative is better, other than saying to trust the engineers. I trust that the engineers don't have an unlimited budget and have to make compromises.

Great. I have been playing on PCs and consoles longer than you have, then.

That does not answer my question. How does what James Stanard says there explain SFS's benefits over the older but similar PRT? Unless you believe PRT is exactly what he is comparing it to. In that case, how much benefit is SFS over no texture streaming at all? 7-10x as efficient?

It is 2.5x more efficient than the Xbox One X, which supports PRT.

As I understand it, SFS culls the parts of textures that won't be visible on screen before the I/O starts streaming game textures into GPU RAM (or directly into the GPU, using the SSD as RAM).

It is completely different from the way games utilize RAM nowadays. Game engines load the whole texture into RAM, and with PRT only a part of it makes it to the screen, saving performance but not RAM.
 
Last edited:

Rudius

Member

Destroy All Humans! Update 1.08 Adds Support For 60 FPS On PS5, Improved Visuals On XSX


Black Forest Games has released a new update for Destroy All Humans! that has added a new FPS limit option and implemented some bug fixes.


  • PS4 patch to unlock FPS restrictions (this will help players on PS5 as well)
  • X1X patch for improved details
  • XSX patch for improved resolution and details

One game that really needs a patch like that is the Mafia remake. I want to play that, but at 60fps it would be much better.
 

mitchman

Gold Member
I don't think so. Usually the SDK has more features available than the kernel can support, because it's quite a bit more complex for devs to update the kernel than an SDK. Bear in mind we're talking about a Windows kernel; it's very complex.
The kernel in the Xbox has been stripped down significantly compared to Windows, so much so that it's not directly comparable.
Does this SFS killer new feature come before or after the killer power-of-the-cloud feature? Just to set expectations really :)
Didn't the MS PR machine also talk about SFS before the Xbox 360 launch and how it would enable megatextures and yada yada? Pretty sure someone posted a video about just that on this forum recently. Sounds like they pulled out the old PR playbook from the last successful Xbox console for this generation too.
 
Last edited:

Boglin

Member
Why can't the CPU access the slow part and the GPU the fast part at the same time?

Do you really think that the engineers of the second biggest company in the world have more limitations than a company that isn't even in the top 10?

Please, explain the differences between PRT and SFS.

About unified memory, I was complaining because I don't know why some people think it is the panacea. Whether it is better depends on your goals.
The 320-bit bus can only handle 560GB/s. The 560GB/s 10GB pool and the 336GB/s 6GB pool cannot run at full speed simultaneously, because that would add up to 896GB/s.
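Those figures follow directly from the chip layout. A sketch, assuming the publicly reported 14 Gbps GDDR6 on a 320-bit bus (ten 32-bit chips, with the slow 6 GB living on the six 2 GB chips):

```python
GBPS_PER_PIN = 14  # GDDR6 data rate per pin, in Gb/s

def pool_bandwidth_gb(chips, bits_per_chip=32):
    """Peak GB/s for a memory pool striped across `chips` GDDR6 devices."""
    return chips * bits_per_chip * GBPS_PER_PIN / 8

fast = pool_bandwidth_gb(10)  # 560.0 GB/s: the 10 GB pool spans all ten chips
slow = pool_bandwidth_gb(6)   # 336.0 GB/s: the 6 GB pool spans only six chips
# The two pools share the same physical pins, so their peaks don't add up.
```

This is why the pools can't both run at peak at once: a transaction to the slow pool occupies six of the ten chips that the fast pool also needs.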

I did not at all imply Microsoft is more limited than Sony, only that Microsoft had a budget and that means compromises. Sony made compromises too and I think the Xbox is the more powerful console.

Both SFS and PRT break down complete textures into tiles in order to save on memory bandwidth by not loading unnecessary portions of the texture. SFS is an evolution of PRT that maps the tiles into a dedicated cache making the process more efficient still.
SFS also applies a filter which smooths the transitions between different LODs of the same texture.

The least you can do is give a counter example of a way separate pools are better. Developers want unified memory for a reason.
 
Last edited:

Boglin

Member
It is 2.5x more efficient than the Xbox One X, which supports PRT.

As I understand it, SFS culls the parts of textures that won't be visible on screen before the I/O starts streaming game textures into GPU RAM (or directly into the GPU, using the SSD as RAM).

It is completely different from the way games utilize RAM nowadays. Game engines load the whole texture into RAM, and with PRT only a part of it makes it to the screen, saving performance but not RAM.
PRT is supported on both the Xbox One X and PS4, and at the same time fairly useless on both, because it is limited by the latency of the HDD: with the seek times of mechanical drives, the textures can't be grabbed in any reasonable time frame.

Yes, SFS has the textures mapped into cache so the developers don't have to "guess" which portions of textures to stream in.

The purpose of PRT is to load only a portion of a texture into memory, just like SFS; PRT just isn't quite as refined as SFS.
I don't understand what you mean by only partially loading on screen. That has been the way it's done for over 25 years, otherwise textures would flicker and blend. It's not a PRT feature.
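The residency idea both techniques share can be sketched as a simple loop (a hypothetical toy model; real PRT uses hardware page tables, and SFS adds a sampler-feedback map that the GPU writes during rendering, so the names below are illustrative):

```python
def tiles_to_stream(sampled, resident):
    """Return the texture tiles a frame sampled that aren't yet in memory.
    Tiles are (mip, x, y) ids; `sampled` models the sampler-feedback map."""
    return sampled - resident

resident = {(0, 0, 0), (0, 1, 0)}          # tiles already in RAM
sampled = {(0, 1, 0), (1, 3, 2)}           # tiles the GPU actually touched
print(tiles_to_stream(sampled, resident))  # prints {(1, 3, 2)}
```

The refinement SFS brings is in how `sampled` is produced: the GPU reports exactly which tiles and mip levels it sampled, instead of the engine estimating them on the CPU.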
 

Godfavor

Member
PRT is supported on both the Xbox One X and PS4, and at the same time fairly useless on both, because it is limited by the latency of the HDD: with the seek times of mechanical drives, the textures can't be grabbed in any reasonable time frame.

Yes, SFS has the textures mapped into cache so the developers don't have to "guess" which portions of textures to stream in.

The purpose of PRT is to load only a portion of a texture into memory, just like SFS; PRT just isn't quite as refined as SFS.
I don't understand what you mean by only partially loading on screen. That has been the way it's done for over 25 years, otherwise textures would flicker and blend. It's not a PRT feature.


Unless I've missed something, MS explained that they observed how the Xbox One X handles game RAM and found that a big portion of RAM remained unused, because some textures never needed to load at all. That's why SFS streams only the textures that need to be on screen (and even scraps mips if the textured area is further away from the player) directly from the SSD, without loading the whole texture into RAM and then using only a portion of it on screen.

That's where the "2.5x memory and bandwidth savings" figure came from.

I think it was in the Series S promotional video about SFS.

Well, there is a pretty good explanation of how it can be used here:
 
Last edited: