If I understand your question correctly, then I'm not sure. I would imagine that any current game engine could seek a piece of data from the SSD as many times as it wants, but I'm by no means an expert. I was trying to illustrate a scenario in which the size of the data on disk is not reflective of the amount of data that is eventually rendered on screen, setting aside the obvious use of compression to accommodate this. Ultimately, I wanted to highlight that the next-gen consoles may indeed need 5.5 GB/s of raw speed from the SSD at any given time.
Not only any engine: any game will read data from the PS5 SSD at ~8-9 GB/s, not 5.5 GB/s.
That's because all the data on the SSD is compressed and gets decompressed in real time, copying ~8-9 GB/s to RAM (and up to 22 GB/s if particularly well compressed, according to Cerny, though I assume that may be a theoretical-only number). 5.5 GB/s is the raw speed before decompression is applied.
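Quick back-of-the-envelope on those figures (the ~1.6:1 average ratio is just what's implied by the numbers above, and 4:1 is what 22 GB/s over 5.5 GB/s works out to; neither is an official spec):

```cpp
#include <cstdio>

int main() {
    const double raw_gbps   = 5.5;         // raw SSD read speed (GB/s)
    const double avg_ratio  = 1.6;         // assumed average compression ratio
    const double best_ratio = 22.0 / 5.5;  // 4:1, implied by the 22 GB/s figure

    // Effective bandwidth = raw bandwidth * compression ratio,
    // since the decompressor expands data after it leaves the SSD.
    printf("typical: %.1f GB/s\n", raw_gbps * avg_ratio);   // ~8.8 GB/s
    printf("best:    %.1f GB/s\n", raw_gbps * best_ratio);  // 22.0 GB/s
}
```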
Came across this post from Shifty Geezer on Beyond3D
Bolded is my own emphasis.
So apparently having a plethora of objects on-screen is more a question of whether the hardware can render them en masse on the CPU and GPU side, rather than needing an instance of each object in physical memory. Of course, completely different models and objects each need their own instance in physical memory, but if you're talking about variants of existing models, it would probably be possible to keep just the specific variants as their own instances in memory and then fill in the missing portions at render time for output to the display.
Just thought this was interesting to share, because I think some people are under the impression the SSDs are enabling hundreds of GBs of unique data to be streamed into the system, as if each on-screen instance needs its own copy in memory. In reality, if it's just multiple output instances of the same object, or even somewhat varied objects, you only need that object in memory a single time (and preferably main memory).
Yes, in examples like that pile of bodies that are multiple instances of the same object, there is only one copy in memory of the body model/textures/shaders/whatever all the bodies share. Then, for each instance, each body, there are the things that make them different, which are mostly just a few numbers: in this case, mostly only the rotation and position coordinates. In a real game they'd have a few more numbers, like their health, the ID of the current animation and its frame, etc.
All these numbers that are repeated per instance take a very tiny amount of memory. The model/animations/sounds/etc. (especially the textures), which exist only once in memory, need a comparatively much bigger amount, and this is why games tend to repeat stuff.
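A rough sketch of what that split looks like in code (names and fields are made up for illustration, not from any actual engine):

```cpp
#include <cstdint>
#include <vector>

// Shared data: loaded once, however many bodies are on screen.
struct BodyModel {
    // mesh, textures, shaders, animation clips... easily tens of MB
};

// Per-instance data: one small record per body in the pile.
struct BodyInstance {
    float    position[3];  // 12 bytes
    float    rotation[4];  // 16 bytes (quaternion)
    float    health;       //  4 bytes
    uint16_t animationId;  //  2 bytes
    uint16_t animFrame;    //  2 bytes
};                         // ~36 bytes each

int main() {
    BodyModel sharedModel;                   // one copy in memory
    std::vector<BodyInstance> pile(10'000);  // ten thousand bodies
    // 10,000 * ~36 bytes is only ~360 KB of instance data,
    // versus a single multi-MB model they all share.
}
```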
Then, each time you draw something, all this data is taken from memory to place everything the camera will be seeing into the GPU, and to optimize the GPU's work you do some things to cull/remove the polygons of objects that aren't drawn because the camera can't see them at that moment.
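For reference, a simplified CPU-side version of that visibility culling, testing a bounding sphere against the camera frustum (a sketch of the general technique, not how any particular engine does it):

```cpp
struct Vec3   { float x, y, z; };
struct Plane  { Vec3 normal; float d; };        // plane: dot(normal, p) + d = 0,
                                                // normals point into the frustum
struct Sphere { Vec3 center; float radius; };   // bounding volume of an object

// An object is culled if its bounding sphere lies fully behind any frustum plane.
bool isVisible(const Sphere& s, const Plane frustum[6]) {
    for (int i = 0; i < 6; ++i) {
        float dist = frustum[i].normal.x * s.center.x
                   + frustum[i].normal.y * s.center.y
                   + frustum[i].normal.z * s.center.z
                   + frustum[i].d;
        if (dist < -s.radius) return false;  // entirely outside this plane
    }
    return true;  // potentially visible: submit it to the GPU
}
```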
The thing is, with previous technology it was traditionally the programmer who did that culling. In these new consoles (and I assume RDNA 2 PC GPUs too) there is specific hardware able to do things like that culling when drawing, which means there's more room and horsepower left over for other things. I assume the console also uses that same hardware to reduce a model's detail to the maximum detail the native resolution can show (i.e., to cut the triangles that end up smaller than a pixel).
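And a hand-wavy sketch of that "triangles smaller than pixels" idea: project the object's size to screen space and pick the coarsest LOD whose triangles still cover about a pixel. The formula and names here are illustrative, not how UE5 or the console hardware actually does it:

```cpp
#include <cmath>

// Pick a level of detail so triangles stay around one pixel in size.
// Assumes LOD 0 is full detail and each level halves the detail of the last.
int pickLod(float boundingRadius, float distance,
            float verticalFovRadians, int screenHeightPx, int lodCount) {
    // Approximate on-screen height of the object, in pixels.
    float projectedPx =
        (boundingRadius / (distance * std::tan(verticalFovRadians * 0.5f)))
        * screenHeightPx;
    // Take log2 of how much smaller than "full screen" the object appears:
    // each halving of apparent size lets us drop one LOD level.
    int lod = (int)std::log2f(
        std::fmax(1.0f, screenHeightPx / std::fmax(projectedPx, 1.0f)));
    return lod < lodCount ? lod : lodCount - 1;
}
```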
In addition to that, I think these massive original assets shown in the demo aren't stored at their original level of detail on the console's SSD/disk. I assume that when UE5 exports to the console, it stores the maximum detail it calculates the console will be able to show at the target resolution and FPS, given the console's hardware and the closest the camera can get to each object. The idea being to reduce the game's install size, optimize memory use, reduce GPU work, and so on.
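If that guess is right, the export-time cap could be something as simple as one triangle per pixel at native resolution, since finer triangles can't be resolved anyway (purely illustrative, not the actual UE5 pipeline):

```cpp
#include <cstdio>

// Illustrative upper bound on useful stored detail for a full-screen asset:
// roughly one triangle per pixel at the target resolution.
long long maxUsefulTriangles(int resX, int resY) {
    return 1LL * resX * resY;
}

int main() {
    printf("%lld\n", maxUsefulTriangles(3840, 2160));  // ~8.3 million triangles
}
```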
And well, I think none of these things will be exclusive to UE5. I think the other next-gen engines, especially the first-party ones, will do something very similar, or sometimes even better, in their next-gen iterations.