I don't get why you would link to a video by someone who is just giving their take on the same info we have direct from Epic, and getting things wrong (4 triangles per pixel, not 1). In the comments on that video, their response to a question starts with "I think...", so they are just speculating like the rest of us.
People keep mentioning meshes of polygons for Nanite, but I'm pretty sure Epic don't use those words when referring to Nanite rendering; they specifically say "triangle", and only use those other terms when describing content in the context of the creation tools/pipeline, before the assets are brought into the demo and rendered by Nanite.
I'm more inclined to go with user onesvenus' take that Nanite uses signed distance field (SDF) volumetric rendering, where geometry has a tiny memory footprint because it is described procedurally by adding mathematical functions together, representing the surface with effectively arbitrary precision, and is then rendered via a fragment shader; no vertex shader pass is needed, unlike polygon primitive rendering.
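To make that idea concrete, here is a minimal SDF sketch in Python; the function names and shapes are purely illustrative (nothing here is from Nanite itself), but it shows why the memory footprint is tiny: each shape is a handful of floats plus a formula, not a vertex buffer.

```python
import math

def sdf_sphere(p, centre, radius):
    """Signed distance from point p to a sphere's surface (negative = inside)."""
    return math.dist(p, centre) - radius

def sdf_union(d1, d2):
    """Combine two shapes: the nearer surface wins."""
    return min(d1, d2)

def scene(p):
    # Two spheres described by seven floats each instead of a triangle mesh.
    a = sdf_sphere(p, (0.0, 0.0, 0.0), 1.0)
    b = sdf_sphere(p, (1.5, 0.0, 0.0), 0.75)
    return sdf_union(a, b)

def ray_march(origin, direction, max_steps=64, eps=1e-4):
    """What a fragment shader would do per pixel: step along the view ray
    by the returned distance until it reaches ~0 (a surface hit)."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = scene(p)
        if d < eps:
            return t  # hit: distance along the ray
        t += d
    return None  # miss

print(ray_march((0.0, 0.0, -3.0), (0.0, 0.0, 1.0)))  # → 2.0 (front of sphere a)
```

In a real renderer this evaluation happens per pixel on the GPU, which is the "fragment shader, no vertex pass" property being speculated about.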
As for the 768MB streaming pool, don't you find it odd that it is exactly 3/4 of a GB, and not some other varied size tuned to fit the UE5 assets optimally?
Now that I've clocked the specific number, it looks decidedly like (a multiple of) the physical size of the eSRAM in the IO complex, which could be why Epic are able to give that number and still not reveal NDA'd PS5 hardware specs.
While trying to google specs/costs of eSRAM to work out what might be in the IO complex, I stumbled on the article below:
2020 Review of Intel's (2015) Broadwell CPU with 128MiB of esram
Based on that info, I doubt 768MB (512MB + 256MB) is within the PS5's BoM budget, but I do suspect it has 128MB + 64MB (or at least 64MB + 32MB), and that the 768MB streaming pool is a multiple of the physical buffer because of the 33ms frame times; at 60Hz rendering I would speculate the effective pool would be 384MB.
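The arithmetic behind that hunch, written out; the eSRAM split and the multiplier are my guesses, not known PS5 specs:

```python
# Back-of-envelope check: is 768MB a clean multiple of a plausible
# physical eSRAM buffer? (128MB + 64MB is speculation, not a known spec.)
esram_guess_mb = 128 + 64        # speculated eSRAM split in the IO complex
pool_mb = 768                    # streaming pool size Epic quoted

multiplier = pool_mb / esram_guess_mb
print(multiplier)                # → 4.0, a clean multiple

# If the pool scales with frame time, halving the frame time
# (30Hz / 33ms -> 60Hz / 16.6ms) would halve the effective pool:
pool_60hz_mb = pool_mb // 2
print(pool_60hz_mb)              # → 384, matching the 60Hz speculation
```

The same check with the smaller 64MB + 32MB guess gives a multiplier of 8, so either split keeps 768MB as a tidy multiple.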
My logic is that the UE5 demo is meant to show off REYES-style rendering, so it needs to be exhausting the available RAM, or the available bandwidth/latency, by the next frame; otherwise Epic have failed to make the demo look as good as it could at the reveal, which I don't believe is the case, because the demo literally looks unreal IMHO.
IMO, it would be impossible for the 768MB to be compressed geometry data, as you think, because you would need to store that 768MB in (unified) RAM, and what is the point of storing compressed data in RAM? Especially when all the data the UE5 demo needs should only be arriving in RAM, as needed, already uncompressed, because it was retrieved from the SSD by the IO complex, whose dedicated task is decompressing data so the CPU/GPU are spared a compute-hungry job they can't do at low latency.
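The flow I'm describing can be sketched in miniature; zlib here merely stands in for a dedicated hardware decompressor (it is not the actual PS5 codec), and the "asset" is fabricated sample data:

```python
import zlib

# Toy sketch of the decompress-on-load flow: compressed bytes live on the
# SSD, get decompressed in transit, and only ready-to-use bytes ever occupy
# the unified-RAM streaming pool. zlib stands in for the hardware unit.

asset = b"vertex soup " * 4096              # pretend geometry data (~48KB)
on_ssd = zlib.compress(asset, level=9)      # what lives on the SSD

# The IO-complex step in this model: decompress before placement in RAM,
# so the CPU/GPU never spend cycles on decompression themselves.
in_ram = zlib.decompress(on_ssd)

print(len(on_ssd), len(in_ram))             # compressed size vs resident size
assert in_ram == asset
```

The point of the sketch is the placement of the decompress step: it happens between the SSD read and the RAM write, which is why, under this model, counting the 768MB pool as compressed data doesn't make sense.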