
Sampler Feedback Streaming appears to be the real deal. Game Stack Live real-time demo impressions. (video to come soon)

MonarchJT

Banned
It's virtual texturing; it's been done before. It's now faster on the Series X I/O, but it won't remove pop-in. Pop-in will always depend on your I/O latencies and SSD bandwidth, and sampler feedback won't solve that. For comparison, they are talking about 10GB of data in a small room here, while UE5 was streaming hundreds of GB in a scene with 8K textures... I want to see how the Series X handles that, since we've already seen pop-in in The Medium, a game that's already using the Series X Velocity Architecture. This is all marketing talk; we've heard the same pitch since the Xbox 360. They called it MegaTextures on the 360, then tiled resources on the Xbox One, then hardware partially resident textures on the PS4, and now SFS. It's the same thing, only a bit faster. It's not the solution to memory; it just helps reduce pop-in.
Everything in this post is wrong. And since you are talking about different technologies with different hardware implementations, it's pointless to comment on it.
P.S. Since the mods, for whatever reason, didn't like my post, I'll put it this way: if Epic had released the demo on PC like they always do (and who knows why they didn't... lol), you'd see how wrong you are about that too.
 
Last edited:
Everything in this post is wrong. And since you are talking about different technologies with different hardware implementations, it's pointless to comment on it.
P.S. Since the mods, for whatever reason, didn't like my post, I'll put it this way: if Epic had released the demo on PC like they always do (and who knows why they didn't... lol), you'd see how wrong you are about that too.
The demo wasn't done on PC, stop the FUDding! They played a video of the demo on a laptop at an Epic Games China conference, so wake up and smell the coffee. And sampler feedback, PRT, and megatextures are all virtual texturing; it's not a different technology. In fact, Microsoft calls Sampler Feedback "PRT+". Virtual texturing has existed for decades.
 
Wonderful take: "it's the same but faster". So it's not the same, then? The Medium is a last-generation game, not a next-generation engine.
The Medium is a next-gen game, not last-gen; it was only developed for Series X and PC, so I don't get what you're saying here. And the developers said they are using the Velocity Architecture in the engine, and it still has pop-in. There's no PS5 game with pop-in, not even one.
 

MonarchJT

Banned
The demo wasn't done on PC, stop the FUDding! They played a video of the demo on a laptop at an Epic Games China conference, so wake up and smell the coffee. And sampler feedback, PRT, and megatextures are all virtual texturing; it's not a different technology. In fact, Microsoft calls Sampler Feedback "PRT+". Virtual texturing has existed for decades.
There is an official article from the always-reliable PC Gamer, who directly called an Epic engineer to ask how the demo would run on PC. Go read it. If you think a video runs at 40fps, and that an engineer doesn't know the difference between running a video and running a demo, it makes me laugh. But I'll let you believe what you want.
Again, those technologies are always evolutions of one another. The reality is that you just don't like Xbox. We know it, you know it; you've been doing this for decades.
Since the truth will only be known when the demo is released on PC, let's close the topic here.
 
Last edited:

Three

Member
Virtual texturing and Sampler Feedback Streaming are a little different. Using Sampler Feedback to handle texture streaming is what's new. You could do virtual texturing for many years before, but all without Sampler Feedback functionality; at least, no DirectX title had ever used it, and to my knowledge neither did any other major gaming platform or hardware until Nvidia's Turing launch in late 2018. Sampler Feedback isn't just the same old thing we've always been using. It's quite new in what it makes possible.
Yes, it does things a little differently, but these are only minor differences in predicting tile residency and having a fallback, I think. I'm talking about the concept shown in the video: the idea of using tiled resources to load different-quality data based on the view. The video doesn't show the difference from Granite.
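For the PC folks, here's roughly what the new functionality looks like at the API level. A minimal, hedged D3D12 sketch (the names `device`, `streamedTexture`, `cpuHandle`, `texWidth` and `texHeight` are assumed to exist, not taken from any shipping title): you create a feedback map paired with the streamed texture, and the pixel shader then records which mips the sampler actually touched via `WriteSamplerFeedback` (Shader Model 6.5).

```cpp
// Minimal sketch: creating a MIN_MIP sampler-feedback map in D3D12.
// Assumes an ID3D12Device8* 'device', a paired texture 'streamedTexture'
// and a UAV descriptor handle 'cpuHandle' already exist.
#include <d3d12.h>
#include <wrl/client.h>

Microsoft::WRL::ComPtr<ID3D12Resource> feedbackMap;

D3D12_RESOURCE_DESC1 fbDesc = {};
fbDesc.Dimension        = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
fbDesc.Width            = texWidth;   // same dimensions as the paired texture
fbDesc.Height           = texHeight;
fbDesc.DepthOrArraySize = 1;
fbDesc.MipLevels        = 1;
fbDesc.Format           = DXGI_FORMAT_SAMPLER_FEEDBACK_MIN_MIP_OPAQUE;
fbDesc.SampleDesc       = {1, 0};
fbDesc.Flags            = D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS;
fbDesc.SamplerFeedbackMipRegion = {64, 64, 1};  // tracking granularity

D3D12_HEAP_PROPERTIES heapProps = {};
heapProps.Type = D3D12_HEAP_TYPE_DEFAULT;
device->CreateCommittedResource2(&heapProps, D3D12_HEAP_FLAG_NONE, &fbDesc,
    D3D12_RESOURCE_STATE_UNORDERED_ACCESS, nullptr, nullptr,
    IID_PPV_ARGS(&feedbackMap));

// Pair the feedback map with the texture it shadows; the pixel shader then
// writes into it with FeedbackTexture2D<SAMPLER_FEEDBACK_MIN_MIP>::WriteSamplerFeedback.
device->CreateSamplerFeedbackUnorderedAccessView(
    streamedTexture.Get(), feedbackMap.Get(), cpuHandle);
```

That's the genuinely new part: the GPU itself records what the sampler touched, instead of the shader having to reconstruct it.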
 
Last edited:

MonarchJT

Banned
The Medium is a next-gen game, not last-gen; it was only developed for Series X and PC, so I don't get what you're saying here. And the developers said they are using the Velocity Architecture in the engine, and it still has pop-in. There's no PS5 game with pop-in, not even one.
If you had any knowledge of what you're talking about, you would know that many of the hardware features of the XSX are still in beta, even on devkits. The Medium does not use SFS, and neither does any other game.
 

Jemm

Member
The Medium is a next-gen game, not last-gen; it was only developed for Series X and PC, so I don't get what you're saying here. And the developers said they are using the Velocity Architecture in the engine, and it still has pop-in. There's no PS5 game with pop-in, not even one.
There are several APIs and technologies in the XVA.

If a dev says they are "using XVA", it doesn't necessarily mean they are using all of the technologies (like SFS in this case). They could just be utilizing the DirectStorage API, for all we know.
 

Major_Key

perm warning for starting troll/bait threads
https://twitter.com/JamesStanard/status/1250202122055864320

From the SFS patent:

"Software-only residency map solutions typically perform two fetches of two different buffers in the shader, namely the residency map and the actual texture map. The primary PRT texture sample is dependent on the results of a residency map sample. These solutions are effective, but require considerable implementation changes to shader and application code, especially to perform filtering the residency map in order to mask unsightly transitions between levels of detail, and may have undesirable performance characteristics. The improvements herein can streamline the concept of a residency map and move the residency map into a hardware implementation."

SFS was designed to accelerate software virtual texturing.
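To make the patent's "two fetches" concrete, here is a rough CPU-side rendition of the software-only approach it describes (the residency-map layout and all names here are invented for illustration; this is not Microsoft's implementation). The dependent read is exactly the latency that SFS moves into hardware:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical software residency map: one entry per coarse tile, storing
// the finest mip level currently resident in memory for that tile.
struct ResidencyMap {
    int tilesX, tilesY;
    std::vector<float> finestResidentMip;  // fractional, for smooth blending

    float Sample(float u, float v) const {
        int tx = std::min(int(u * tilesX), tilesX - 1);
        int ty = std::min(int(v * tilesY), tilesY - 1);
        return finestResidentMip[ty * tilesX + tx];
    }
};

// Fetch #1: the residency map. Fetch #2 (the actual PRT texture sample)
// must wait on this result; that dependency is the software-path cost.
float ClampedMipForSample(const ResidencyMap& rm, float u, float v,
                          float desiredMip) {
    float resident = rm.Sample(u, v);
    // Never sample finer (lower mip index) than what is resident, or the
    // PRT fetch would touch unmapped tiles.
    return std::max(desiredMip, resident);
}
```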

Sampler Feedback Streaming

SFS is based on PRT+, and PRT+ is based on PRT and Sampler Feedback. SFS is a complete solution for texture streaming, containing both hardware and software optimizations.

Firstly, Microsoft built caches for the Residency Map and Request Map, and records asset requests on the fly. The difference between this method and traditional PRT methods is kind of like this: previously you had to check the map; now you have a GPS.

Secondly, you need a fast SSD to use PRT+ and squeeze out everything available in the RAM. You won't want to use an HDD with PRT+, because when an asset request emerges, it has to be answered fast (within milliseconds!). The SSD on Xbox is now prioritized for game asset streaming, to minimize latency to the last bit.

Thirdly, Microsoft implemented a new hardware method for texture filtering and sharpening. This is used to smooth the loading transition from mip 8 to mip 4 or mip 0, etc. It's not magic, but it works like magic:

[Diagram: smoothing the mip transition across frames]


As we have stated, the sampler knows what it needs. The developer can answer a request for mip 0 by giving mip 0.8 on frame 1, mip 0.4 on frame 2, and eventually mip 0 on frame 3.

The fractional part is used in texture filtering, so that the filter can work as intended and present the smoothest possible transition between LOD changes.

It also allows the storage system to have more time to load assets without showing artifacts.

These hardware-based optimizations, combined with PRT+, ultimately come together as what we know as Sampler Feedback Streaming. Its potential is wild, just like mesh shaders and ray tracing.
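A toy sketch of that frame-by-frame hand-off, under the assumption that the title keeps a fractional per-texture mip clamp and eases it toward the requested mip as tiles arrive (the step size and the structure are made up purely for illustration):

```cpp
#include <algorithm>

// Toy per-texture mip clamp easing; 'step' and this struct are invented
// to illustrate the fractional transition described above.
struct StreamedTexture {
    float requestedMip = 0.0f;  // what sampler feedback says we need
    float clampedMip   = 8.0f;  // what we currently allow the sampler to use
};

// Called once per frame, after the streaming system has uploaded new tiles.
void EaseMipClamp(StreamedTexture& t, float step = 0.4f) {
    if (t.clampedMip > t.requestedMip) {
        // Move a fraction of a mip per frame, e.g. 0.8 -> 0.4 -> 0.0, letting
        // the texture filter blend between levels instead of popping.
        t.clampedMip = std::max(t.requestedMip, t.clampedMip - step);
    }
}
```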
 

RoadHazard

Gold Member
What's weird to me is that this wasn't already the standard way of doing things. Just seems like common sense to not load the parts of textures that are not needed, in the same way as objects outside the viewport are culled and not rendered. But I guess it requires hardware support that hasn't existed before?
 

DenchDeckard

Moderated wildly
I can't wait for Unreal Engine 5 games to launch on PC, and for those high-end PCs to run them all better than any console... just like they always have.

These new consoles are quite clearly known from a power standpoint: a pretty decent mid-range GPU and a mid-range CPU. Plenty of tricks will be created to aid game development, but PC will always run a game better if it's programmed properly. To think otherwise is just madness.

It's obvious that a 2070-powered laptop with a super-fast SSD could run the Unreal Engine demo. That demo was using DRS below 1440p at times and ran at 30 FPS. A PC with a 3080 would eat it for breakfast at 30 FPS.
 
What's weird to me is that this wasn't already the standard way of doing things. Just seems like common sense to not load the parts of textures that are not needed, in the same way as objects outside the viewport are culled and not rendered. But I guess it requires hardware support that hasn't existed before?
This, and feedback buffers to handle required mipmap levels, have existed for years. There is nothing new here in any way, shape or form. RAGE does this, ffs!

 
Last edited:

RoadHazard

Gold Member
This, and feedback buffers to handle required mipmap levels, have existed for years. There is nothing new here in any way, shape or form. RAGE does this, ffs!


Yeah, I thought about megatexture too... But I guess the Series has hardware stuff to do it faster (beyond the SSD)? Otherwise, what are they making such a big deal over?
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
So the PS5 could do the same thing then, only even faster because of the much higher data throughput?
I think these are utilities that make things easier on the developer, rather than the impossible becoming possible. Having the HW give you efficient access to what was used to render the current frame (the sampler feedback) can make building efficient streaming prediction easier, since you have fast access to that data; they have a solution for blending across page boundaries to minimise streaming pop-in; and they added instructions to trigger the page faults you need to force the streaming system to fetch the data you are likely to need next, etc.

While some of this work requires more effort on the dev side and a bit of extra burden on the GPU, the PS5 does have a lot more bandwidth to play with, and very low latency when accessing small chunks of data on demand and urgently (6 priority levels instead of 2).
 

quest

Not Banned from OT
I think these are utilities that make things easier on the developer, rather than the impossible becoming possible. Having the HW give you efficient access to what was used to render the current frame (the sampler feedback) can make building efficient streaming prediction easier, since you have fast access to that data; they have a solution for blending across page boundaries to minimise streaming pop-in; and they added instructions to trigger the page faults you need to force the streaming system to fetch the data you are likely to need next, etc.

While some of this work requires more effort on the dev side and a bit of extra burden on the GPU, the PS5 does have a lot more bandwidth to play with, and very low latency when accessing small chunks of data on demand and urgently (6 priority levels instead of 2).
There is no point wasting time on the PS5; just brute-force it, as the Unreal 5 demo demonstrated. Microsoft needed something that works for multiple vendors and platforms, and this is what they came up with. It should work on lots of PCs. Good luck finding many PCs that can emulate the PS5. It's apples to pears here. They have a lot of the same goals in the end, but chose very different routes to get there, according to need.
 

3liteDragon

Member
It's obvious that a 2070-powered laptop with a super-fast SSD could run the Unreal Engine demo. That demo was using DRS below 1440p at times and ran at 30 FPS. A PC with a 3080 would eat it for breakfast at 30 FPS.
Don't wanna derail the thread, but that was already debunked by Tim himself. It could still scale down to run on the latest gaming laptops/PCs, but it just wouldn't run at the exact same fidelity as on the PS5 (though I don't know if anyone would be able to tell the difference even if they heavily dropped the triangle count for PC). The final output of that demo was a full 4K image, and the only reason people know it was running at 1440p@30FPS with DRS is that Epic told DF; DF couldn't even tell if it was using TAA or any other techniques and just thought it was a clean native 4K image. And Epic have already said that there's still enough frame-time left to run the EXACT SAME demo at 60FPS on the PS5.

If you wanna talk more about this, PM me instead.


 
Last edited:

DenchDeckard

Moderated wildly
Don't wanna derail the thread, but that was already debunked by Tim himself. It could still scale down to run on the latest gaming laptops/PCs, but it just wouldn't run at the exact same fidelity as on the PS5 (though I don't know if anyone would be able to tell the difference even if they heavily dropped the triangle count for PC). The final output of that demo was a full 4K image, and the only reason people know it was running at 1440p@30FPS with DRS is that Epic told DF; DF couldn't even tell if it was using TAA or any other techniques and just thought it was a clean native 4K image. And Epic have already said that there's still enough frame-time left to run the EXACT SAME demo at 60FPS on the PS5.

If you wanna talk more about this, PM me instead.




It doesn't matter what he said. I'm saying you would have to be crazy to think a modern PC with a decent SSD and GPU couldn't run that benchmark like a PS5.

But you're right, no need to derail.

Also, don't believe a word Tim Sweeney says. Just a hint ;).
 

Three

Member
What's weird to me is that this wasn't already the standard way of doing things. Just seems like common sense to not load the parts of textures that are not needed, in the same way as objects outside the viewport are culled and not rendered. But I guess it requires hardware support that hasn't existed before?
It's because HDDs were not fast enough for what devs are doing now with PRT+. It's the only reason. We have had this since the age of Rage's megatextures and it has been refined over the years.
 

MonarchJT

Banned
Don't wanna derail the thread, but that was already debunked by Tim himself. It could still scale down to run on the latest gaming laptops/PCs, but it just wouldn't run at the exact same fidelity as on the PS5 (though I don't know if anyone would be able to tell the difference even if they heavily dropped the triangle count for PC). The final output of that demo was a full 4K image, and the only reason people know it was running at 1440p@30FPS with DRS is that Epic told DF; DF couldn't even tell if it was using TAA or any other techniques and just thought it was a clean native 4K image. And Epic have already said that there's still enough frame-time left to run the EXACT SAME demo at 60FPS on the PS5.

If you wanna talk more about this, PM me instead.



This is Sweeney ridiculing himself, and fortunately anyone with an IQ higher than a monkey's already knows it.
 
Last edited:

IntentionalPun

Ask me about my wife's perfect butthole
The UE5 demo did many things (and it is of course a scalable engine across various platforms and hardware!), but one of the things they demonstrated was the consistent use of 8K assets, dynamically streamed into VRAM by the PS5 I/O complex and its dedicated API. That is why they could just zoom in on the rocks at the beginning of the demo without any loss in texture quality.

Personally, I think the virtual increase in VRAM through streaming methodologies such as SFS and the I/O complex in the PS5, together with increased geometry complexity (primarily through better geometry culling methodologies such as the GE API in the PS5 and mesh shaders on XSX/S), will have the highest impact on graphical fidelity this coming generation. Exciting!
So you're saying they were actually rendering 8K assets to a 1440p image?

UE5 is all about taking insanely detailed assets, and rendering them at the pixel density of the actual display resolution. That does not involve even STORING the assets at the insane detail, let alone rendering them. It's done by the engine when building the game.

The high-resolution model, with potentially hundreds of millions of triangles, has a high-resolution texture applied to it. The engine then scales that asset using Nanite to a more reasonable storage format. For the UE5 demo they scaled down to 4K for storage; then, at runtime, things were rendered as close to one triangle per pixel (already textured, with the textures scaled) as possible.

They are doing far more than just "better geometry culling": they are taking insanely detailed models and scaling them, then dynamically scaling them again at runtime based on distance from the camera.

People really just don't grasp UE5. I'm not some super-expert, but they are missing that the insane detail is imported into the ENGINE; the insanely detailed models/textures aren't used at runtime.

There is zero REASON to use them. The tech allows a 1440p presentation to look insanely detailed, and it's actually incredibly memory EFFICIENT.
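A quick back-of-the-envelope on the "about one triangle per pixel" target, using the demo's reported 1440p DRS ceiling (purely illustrative arithmetic, nothing from Epic beyond the resolution):

```cpp
#include <cstdio>

// Rough triangle budget if Nanite-style rendering targets ~1 triangle/pixel.
int main() {
    const long long w = 2560, h = 1440;   // the demo's reported DRS target
    const long long pixels = w * h;       // 3,686,400 pixels
    std::printf("~%lld triangles on screen at 1 tri/pixel\n", pixels);
    // So a billion-triangle source model is scaled down to a few million
    // drawn triangles; the extra source detail never reaches the GPU.
    return 0;
}
```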
 
Last edited:

sendit

Member
What's weird to me is that this wasn't already the standard way of doing things. Just seems like common sense to not load the parts of textures that are not needed, in the same way as objects outside the viewport are culled and not rendered. But I guess it requires hardware support that hasn't existed before?

It wasn't the standard because not all platforms had fast I/O. That includes standard SSDs (specifically, the speed at which they operate) and customized hardware pipelines to reduce the time it takes to get data to where it needs to be (video memory).
 

MonarchJT

Banned
So tell me, what if they had shown the demo on XSX instead of on PS5?
What do you mean? It's clear that Epic had an (almost certainly monetary) agreement with Sony to advertise the PS5. If Sweeney and Epic had had a deal with MS, they would have done the same with the consoles reversed, and it would have been just as ridiculous. The story where one of the demo's creator-engineers confuses a video with the actual demo, and then cites the fps, resolution, GPU, and SSD model, is so stupid that I don't understand how anyone can believe it. But do you seriously believe it?
 

Elog

Member
UE5 is all about taking insanely detailed assets, and rendering them at the pixel density of the actual display resolution. That does not involve even STORING the assets at the insane detail, let alone rendering them. It's done by the engine when building the game.

The high-resolution model, with potentially hundreds of millions of triangles, has a high-resolution texture applied to it. The engine then scales that asset using Nanite to a more reasonable storage format. For the UE5 demo they scaled down to 4K for storage; then, at runtime, things were rendered as close to one triangle per pixel (already textured, with the textures scaled) as possible.
You are right that there are two different steps. One is which models you can import into your work when developing your game, both source textures and source geometry, and the fact that the engine can do a lot of the scaling for you instead of you having to work with LODs etc.

However, you are missing the point of streaming. In the demo they also streamed the assets straight to the screen at the quality required; that is why there was no loss of detail as the camera zoomed in on the rocks early on. This requires asset streaming to and from your storage device to work that well (due to VRAM limitations), and that is where the PS5 I/O complex comes in. Of course UE5 will scale to devices with no SSD and no blazing-fast I/O, but then you will see loss of detail as you move closer to objects. Next-generation consoles will be at an advantage over PC here for the foreseeable future. And this is the key point of SFS and the PS5 I/O complex.

They are doing far more than just "better geometry culling": they are taking insanely detailed models and scaling them, then dynamically scaling them again at runtime based on distance from the camera.
Not sure why you are so hostile - chill!

I have not claimed that is the only thing they do; where did I do that? The point is that the key to getting all that geometry through the rendering pipeline without killing your GPU is culling. In UE5 they called it lossless culling, which means they probably conduct some sort of pre-rasterization pass to determine what geometry never to put through your GPU in the first place, i.e. culling. The reason for this is that, for a given amount of data you push through the GPU, it is only when you send it as one piece of data (i.e. as one polygon) that you get close to your silicon's theoretical peak compute power. If you divide the same amount of data into millions of small pieces, the loss in compute efficiency grows exponentially with the number of pieces you divide it into (i.e. the number of polygons, in other words geometric complexity). At some point your GPU basically grinds to a halt. This currently sets a hard limit on geometric complexity in engines.
 
Don't wanna derail the thread, but that was already debunked by Tim himself. It could still scale down to run on the latest gaming laptops/PCs, but it just wouldn't run at the exact same fidelity as on the PS5 (though I don't know if anyone would be able to tell the difference even if they heavily dropped the triangle count for PC). The final output of that demo was a full 4K image, and the only reason people know it was running at 1440p@30FPS with DRS is that Epic told DF; DF couldn't even tell if it was using TAA or any other techniques and just thought it was a clean native 4K image. And Epic have already said that there's still enough frame-time left to run the EXACT SAME demo at 60FPS on the PS5.

If you wanna talk more about this, PM me instead.



This isn't a debunk, this is Tim Sweeney shitposting (as usual) and doing damage control. He's implying that the Epic China lead engineer doesn't know the difference between running a demo and showing a video of it. Peak cringe.
 
Yep, it's great.

Also got confirmation on Game Stack Live regarding hardware support, for the PC gang here.

It is supported on all DX12 GPUs, on PCs and laptops with an NVMe drive.

For the best experience, a DX12 Ultimate GPU is recommended (Turing, Ampere, RDNA2); this will enable SFS similar to the Xbox consoles.

Then it doesn't require dedicated hardware.
 

ToTTenTranz

Banned
Isn't Sampler Feedback Streaming basically a (software?) implementation similar to what the Cache Scrubbers are doing in the PS5?



This isn't a debunk, this is Tim Sweeney shitposting (as usual) and doing damage control. He's implying that the Epic China lead engineer doesn't know the difference between running a demo and showing a video of it. Peak cringe.
Not as cringe as believing some random Chinese-to-English translation of one guy talking on a random Chinese show over the words of Epic's CEO, who also happens to be the original creator of the Unreal Engine.
 

Dampf

Member
Then it doesn't require dedicated hardware.
No, it doesn't.

Sampler Feedback is the one requiring specific hardware, which is why it is only supported on DX12 Ultimate GPUs. DirectStorage will work without Sampler Feedback, but of course combined with Sampler Feedback it will turn into a real game changer.

[Chart: texture memory footprint, traditional mip streaming vs. XVA without SFS vs. XVA with SFS]


Memory for traditional MIP streaming = current/cross-gen title without DirectStorage or Sampler Feedback. Basically any game available right now.

Memory for XVA without SFS = next-gen game with DirectStorage but without Sampler Feedback (so any PC with an NVMe drive and a Pascal, GCN, or RDNA1 GPU).

Memory for XVA with SFS = next-gen game with DirectStorage and Sampler Feedback (a PC with an NVMe drive and a Turing, Ampere, or RDNA2 GPU, as well as the Xbox consoles).
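For anyone wanting to check which of those buckets their PC lands in, here's a small sketch of the D3D12 capability query (device creation omitted; this just reads the sampler-feedback tier):

```cpp
#include <d3d12.h>

// Query whether the GPU exposes the Sampler Feedback hardware tier.
// Assumes 'device' is an already-created ID3D12Device*.
bool SupportsSamplerFeedback(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opts7 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7,
                                           &opts7, sizeof(opts7))))
        return false;  // older runtime/driver: no sampler feedback at all
    // DX12 Ultimate GPUs report TIER_0_9 or TIER_1_0 here.
    return opts7.SamplerFeedbackTier != D3D12_SAMPLER_FEEDBACK_TIER_NOT_SUPPORTED;
}
```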
 
Last edited:

ToTTenTranz

Banned
Nothing about it was random. The Epic China lead engineer said what he said, and Tim Sweeney came in and did some comical damage control.

What's comical is how anyone can look at the stream and see the progress bar in the demo's video.



The laptop is running a video, as clarified by Epic's CEO.
Even if the guy in question is "Epic China's lead developer" (that part was probably made up), and whoever made the Chinese-English translation didn't make a mistake (or make that part up), it obviously doesn't carry more weight than Tim Sweeney's words.
 

MonarchJT

Banned
Isn't Sampler Feedback Streaming basically a (software?) implementation similar to what the Cache Scrubbers are doing in the PS5?




Not as cringe as believing some random Chinese-to-English translation of one guy talking on a random Chinese show over the words of Epic's CEO, who also happens to be the original creator of the Unreal Engine.
Just please... everyone knows Sweeney was in damage control, lol.
 

MonarchJT

Banned
What's comical is how anyone can look at the stream and see the progress bar in the demo's video.



The laptop is running a video, as clarified by Epic's CEO.
Even if the guy in question is "Epic China's lead developer" (that part was probably made up), and whoever made the Chinese-English translation didn't make a mistake (or make that part up), it obviously doesn't carry more weight than Tim Sweeney's words.

There's also an official PC Gamer interview asking about the demo's performance on PC.
 

nemiroff

Gold Member
What's comical is how anyone can look at the stream and see the progress bar in the demo's video.



The laptop is running a video, as clarified by Epic's CEO.
Even if the guy in question is "Epic China's lead developer" (that part was probably made up), and whoever made the Chinese-English translation didn't make a mistake (or make that part up), it obviously doesn't carry more weight than Tim Sweeney's words.


Nobody in that interview suggested or hinted that it was running on the laptop in that studio; that's a smokescreen by Sweeney, and you're gullible to believe it. Why don't you just read the translation? It wasn't a "Look! UE5 is running on a laptop!" demo/interview. These engineers were handed a question about UE5 requirements during a general talk about UE5, and they were talking about their experience while the host ran the video in the background. They were proud of their new tech being able to run on hardware available on the market; that's all there is to it.

What's actually comical is the suggestion that a handful of engineers from Epic in China would conspire to falsify facts at a public, video-recorded event for no plausible or common-sense reason. I can imagine what they were feeling when they honestly did what they thought was the right thing for Epic and UE5 but got bitch-slapped and muzzled by their own bosses, just because Sweeney is in bed with Sony.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
Jesus Christ... this again. NOBODY in that interview said it was running on the laptop in that studio; that's a smokescreen by Sweeney, and you're gullible to believe it. Why don't you just read the translation? It wasn't a "UE5 demo on a laptop"; these engineers were handed a specific question about UE5 requirements during a general talk about UE5, and they were talking about their experience while the host ran the video in the background.
Sure, again with the Epic China argument I see :sleep:.
 
Nobody in that interview suggested or hinted that it was running on the laptop in that studio; that's a smokescreen by Sweeney, and you're gullible to believe it. Why don't you just read the translation? It wasn't a "Look! UE5 is running on a laptop!" demo/interview. These engineers were handed a question about UE5 requirements during a general talk about UE5, and they were talking about their experience while the host ran the video in the background. They were proud of their new tech being able to run on hardware available on the market; that's all there is to it.

What's actually comical is the suggestion that a handful of engineers from Epic in China would conspire to falsify facts at a public, video-recorded event for no plausible or common-sense reason. I can imagine what they were feeling when they honestly did what they thought was the right thing for Epic and UE5 but got bitch-slapped and muzzled by their own bosses, just because Sweeney is in bed with Sony.

 

Allandor

Member
What's weird to me is that this wasn't already the standard way of doing things. Just seems like common sense to not load the parts of textures that are not needed, in the same way as objects outside the viewport are culled and not rendered. But I guess it requires hardware support that hasn't existed before?
Just like good old PRT, it did not make sense to use with HDDs, because you only want to read the data from storage that is needed right now (or a frame later). If you threw all those small read operations at an HDD, the bandwidth would collapse. So everything is loaded in more or less big packages to optimize bandwidth usage; most of the data you read is not needed within a few frames, but it might come in handy. In practice, almost 90% of what gets loaded is never read. That problem is now gone with SSDs, so engines can be optimized to read only the parts that are really necessary.
Sony overcame this with the raw bandwidth of their SSD solution. MS went the other way: they took a fast SSD solution and tried to better optimize what actually needs to be loaded. That is what, e.g., SFS and mesh shaders are all about: optimizing the stuff that gets loaded.
Btw, "just" using PRT has its flaws; as far as I understand it, those are gone with SF, and with SFS there is a bit more hardware acceleration on top, so things not only have a lower bandwidth and memory footprint, but also less of an impact on performance.
 

IntentionalPun

Ask me about my wife's perfect butthole
You are right that there are two different steps. One is which models you can import into your work when developing your game, both source textures and source geometry, and the fact that the engine can do a lot of the scaling for you instead of you having to work with LODs etc.

However, you are missing the point of streaming. In the demo they also streamed the assets straight to the screen at the quality required; that is why there was no loss of detail as the camera zoomed in on the rocks early on. This requires asset streaming to and from your storage device to work that well (due to VRAM limitations), and that is where the PS5 I/O complex comes in. Of course UE5 will scale to devices with no SSD and no blazing-fast I/O, but then you will see loss of detail as you move closer to objects. Next-generation consoles will be at an advantage over PC here for the foreseeable future. And this is the key point of SFS and the PS5 I/O complex.


Not sure why you are so hostile - chill!

I have not claimed that is the only thing they do; where did I do that? The point is that the key to getting all that geometry through the rendering pipeline without killing your GPU is culling. In UE5 they called it lossless culling, which means they probably conduct some sort of pre-rasterization pass to determine what geometry never to put through your GPU in the first place, i.e. culling. The reason for this is that, for a given amount of data you push through the GPU, it is only when you send it as one piece of data (i.e. as one polygon) that you get close to your silicon's theoretical peak compute power. If you divide the same amount of data into millions of small pieces, the loss in compute efficiency grows exponentially with the number of pieces you divide it into (i.e. the number of polygons, in other words geometric complexity). At some point your GPU basically grinds to a halt. This currently sets a hard limit on geometric complexity in engines.

Well, my point was that it is not streaming 8K assets; you repeated that multiple times.

It's also not "lossless geometry culling". They are doing lossless scaling, not culling. "Lossless culling" is just redundant: if you culled something visible and thereby lost display detail, that would be a major bug. All culling is designed to be lossless (from a displayed-image perspective).

And the key is not culling, it's scaling. Of course the engine is doing culling, but what UE5 does is scale assets that would be impossible to render at runtime. That's what I've been getting at, and I'm not trying to be hostile; you just keep repeating something that isn't really true.

Yes, it also was using a very small streaming buffer, and that's made possible by a fast SSD (a small buffer depends on being re-filled fast). But, again, that does not involve streaming actual 8K assets into and out of memory; there is no 8K detail whatsoever in the demo, because there is no reason for it. The "lossless" aspect of UE5 is in displaying the source imagery at just above the pixel density of the output resolution. There is no loss of visible quality, because presenting a 1-billion-poly model with dozens of 8K assets at 1440p would look the same as presenting that same model scaled perfectly to 3 million triangles, with those dozens of 8K assets scaled down to that same pixel density.

And it's doing culling on top of that, but I don't believe they ever said they had any real advancements in that area. They certainly don't on their site, which focuses on the lossless scaling.

 
Last edited:

ToTTenTranz

Banned
Nobody in that interview suggested or hinted that it was running on the laptop in that studio, that's a smoke screen by Sweeney and you're gullible to believe it. Why don't you just read the translation. It wasn't a "Look! UE5 is running on a laptop!" demo/interview, these engineers was handed a question about UE5 requirements during a general talk about UE5 and they were talking about their experience while the host was running the video in the background. They were proud of their new tech being able to run on hardware available on the market, that's all there is to it.

What's actually comical is the suggestion that a handful of engineers from Epic in China would conspire to falsify facts in a public video recorded event for no plausible nor common sense reason. I can imagine what they were feeling when they honestly did what they thought was the right thing for Epic and UE5 but got bitch-slapped and muzzled by their own bosses just because Sweeney is in bed with Sony.
What are your credentials for being able to accurately translate what they said?
Are they speaking in Cantonese or Mandarin?


Funny how the goalposts keep moving.
"It was shown running in a laptop" -> but they showed a video -> "They showed a video but in reality he said it could run in his laptop at work, he just didn't show it running live in a stream dedicated to showing off UE5 because reasons.".

Next step is saying the laptop with an RTX 2080 Max-Q is capable of running the game through the power of the cloud.
 
Last edited:

Lethal01

Member
The laptop is running a video, as clarified by Epic's CEO.

Yes, the laptop is running a video of the demo while they talk about how the actual demo runs on a PC/laptop.

Sweeney thought they were literally talking about how the video of the demo runs on their laptop... which would be silly. So it's clear he's the one who was confused.
 

Elog

Member
Well, my point was that it is not streaming 8K assets; you repeated that multiple times.

It's also not "lossless geometry culling". They are doing lossless scaling, not culling. "Lossless culling" is just redundant: if you culled something visible and thereby lost display detail, that would be a major bug. All culling is designed to be lossless (from a displayed-image perspective).

And the key is not culling, it's scaling. Of course the engine is doing culling, but what UE5 does is scale assets that would be impossible to render at runtime. That's what I've been getting at, and I'm not trying to be hostile; you just keep repeating something that isn't really true.

Yes, it also was using a very small streaming buffer, and that's made possible by a fast SSD (a small buffer depends on being re-filled fast). But, again, that does not involve streaming actual 8K assets into and out of memory; there is no 8K detail whatsoever in the demo, because there is no reason for it. The "lossless" aspect of UE5 is in displaying the source imagery at just above the pixel density of the output resolution. There is no loss of visible quality, because presenting a 1-billion-poly model with dozens of 8K assets at 1440p would look the same as presenting that same model scaled perfectly to 3 million triangles, with those dozens of 8K assets scaled down to that same pixel density.

And it's doing culling on top of that, but I don't believe they ever said they had any real advancements in that area. They certainly don't on their site, which focuses on the lossless scaling.

We might define culling differently? If the number of triangles that the GPU needs to process for a given scene is reduced, culling is occurring. There are multiple ways of achieving this, such as back-face culling, but also: if several triangles reduce to a single pixel in the final rasterized image (which seems to be one of the primary ways UE5 does polygon reduction), all those triangles do not need to be processed, i.e. culling. This allows the use of raw, complex source geometry, which is great for development, but it also increases the polygon count of objects that you move close to.

They used 8K source data on that SSD; for example, the statue consisted of 24 8K assets. The assets are then streamed into VRAM on a per-view basis, in a compressed format. The compressed working set in VRAM (they did not state the compression ratio, except that it compresses less well than a fully compressed file on your storage device, so I would assume 3x or something like that) was 768MB; in other words, a single view used roughly 2GB of uncompressed assets to render a frame. Since compression is better on your storage device, that probably means around 0.5GB of compressed data needs to be read from storage, decompressed, sorted, and moved to VRAM in a fraction of a second, and that probably represents something in the vicinity of 100 individual files (really roughly, but it is not 10 and not 1000 files). That requires SFS or the PS5 I/O, plus an SSD, to achieve.

That last sentence was my whole point in this thread - not sure anymore if we disagree or agree tbh. I am confused...
 
Last edited:
What's comical is how anyone can look at the stream and see the progress bar in the demo's video.



The laptop is running a video, as clarified by Epic's CEO.
Even if the guy in question is "Epic China's lead developer" (that part was probably made up), and whoever made the Chinese-English translation didn't make a mistake (or make that part up), it obviously doesn't carry more weight than Tim Sweeney's words.

You fell for Sweeney's damage control. The Epic China engineer never claimed he was running the demo live. He showed the demo off in the video and talked about how it ran back at the office. Sweeney is the one who purposefully misunderstood what was happening in order to do some weak-ass damage control.
 