
Xbox Series X’s BCPack Texture Compression Technique 'might be' better than the PS5’s Kraken

M1chl

Currently Gif and Meme Champion
Hardly baseless. We have a demo of it in action in a scene with over 10GB of highly detailed textures, which is a concrete number. Now we just need to get some amazing games using this tech.
Yeah, games talk, not impressive demos. I have yet to see PS5 games matching the UE5 demo, yet that demo was held up as the reason Xbox will suck this gen. I don't think I like discussion that only works if the other hardware has to fail.
 

PaintTinJr

Member
No, I see many people in here thinking for some reason it's 768MB for everything. It's just for geometry (Nanite memory).

Yes, streaming is for the current view, aka screen space, but it doesn't need to stream all 768MB of the mesh pool every frame.

It's far from true that the PS5 SSD, or any SSD, couldn't handle such speeds. Actual streaming depends on camera movement, so on a per-frame basis only some fraction of the 768MB streaming pool is actually streamed to RAM.

That's not correct; there are a lot of triangles in the UE5 demo. Basically, the 768MB pool is the total size of 20 million triangles, compressed.

I see many people misunderstood that slide and I don't know why; it literally says Nanite memory, meaning geometry. But you're right in the sense that the SSD isn't fully utilized by the UE5 demo (I think it gets closer to heavy utilization in the flying part of the demo).
I hope this makes it clear once and for all for some of you.

I don't get why you would link to a video by someone that is just giving their take on the same info we have direct from Epic, and getting things wrong (4 triangles per pixel, not 1) and in the comments for that video, their response to a question starts with "I think...", so they are just speculating like the rest of us.

People keep mentioning meshes of polygons for nanite, but I'm pretty sure Epic don't use those words when referring to nanite rendering, and specifically say "triangle" - and only use those other words when mentioning the content in the context of the creation tools/pipeline, prior to the assets being brought into the demo and rendered by nanite.

I'm more inclined to go with user onesvenus' take on things: that Nanite uses signed distance field (SDF) volumetric rendering - where geometry has a tiny memory footprint, because it describes the geometry perfectly by procedurally adding mathematical functions together to represent the geometry with infinite precision, and is then rendered via a fragment shader - with no vertex shader pass needed, unlike polygon primitive rendering.

As for the 768MB streaming pool, don't you find it odd that it is 3/4 of a GB, exactly - and not some other varied size to fit the UE5 assets optimally?
Now that I've clocked the specific number, it looks decidedly like (a multiple of) the physical memory size of the eSRAM in the IO complex, which could be why Epic are able to provide that number and still not reveal NDA hardware specs of the PS5.

While trying to google specs/costs of esram, to work out what might be in the IO complex, I stumbled on this article below

2020 Review of Intel's (2015) Broadwell CPU with 128MiB of esram

Based on that info, I doubt 768MB (512MB + 256MB) is in the budget of the PS5 BoM, but I do suspect it has 128MB + 64MB (or at least 64MB + 32MB), and that the 768MB streaming pool is a multiple of the physical buffer because of the 33ms frame times - whereas at 60Hz rendering I would speculate the effective pool would be 384MB.

My logic is that the UE5 demo is to show off REYES, so it needs to be exhausting the available RAM or available bandwidth/latency by the next frame, or Epic have failed to make the demo look as good as it could - at the reveal - which I don't believe is the case, because the demo literally looks unreal IMHO.

IMO, it would be impossible for the 768MB to be compressed geometry data, as you think, because you would need to store that 768MB in (unified) RAM, and what is the point in storing compressed data in RAM? Especially when all the data the UE5 demo needs should only be arriving in RAM, as needed, in an uncompressed state - because it was retrieved from the SSD by the IO complex, whose dedicated task is decompressing data to save the CPU/GPU from a compute-hungry job they can't do at low latency.
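To put rough numbers on the per-frame streaming both of us keep referring to - using Sony's publicly quoted SSD figures (5.5GB/s raw, ~9GB/s typical compressed), with everything else purely illustrative:

```python
# Rough per-frame streaming budget using Sony's publicly quoted SSD figures.
# The pool size is the one discussed in the thread; everything else is illustrative.

RAW_GBPS = 5.5          # PS5 raw SSD throughput (Road to PS5)
COMPRESSED_GBPS = 9.0   # "typical" Kraken-compressed figure from the same talk
POOL_MB = 768           # Nanite streaming pool size quoted for the UE5 demo

for fps in (30, 60):
    frame_s = 1 / fps
    raw_mb = RAW_GBPS * 1000 * frame_s          # approximating 1 GB = 1000 MB
    comp_mb = COMPRESSED_GBPS * 1000 * frame_s
    print(f"{fps} fps: ~{raw_mb:.0f} MB/frame raw, ~{comp_mb:.0f} MB/frame compressed "
          f"(~{comp_mb / POOL_MB:.0%} of the {POOL_MB} MB pool)")
```

On those assumptions, even the compressed rate only refreshes a fraction of the 768MB pool per 33ms frame, which fits the idea that only a view-dependent slice of the pool is streamed each frame.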
 
Last edited:
Yeah, games talk, not impressive demos.
Jack Nicholson Deal With It GIF


Pretty much. I always look at demos as what might be possible under controlled conditions, but the actual games are what prove what is achievable under normal conditions. Case in point.



It was definitely possible on the PS3, but not exactly applicable to an open-world Spider-Man game on the console. We had to skip a gen to start to get even close to that or surpass it.

[octopus picture]
 
Last edited:

Stooky

Member
I don't really see it that way,

Sony and Microsoft put out realistic numbers for what their SSDs can transfer, both raw and compressed.
Microsoft outlined how efficient their system is at transferring only the texture data that is needed; Sony did not.
Both systems target different problems with I/O. Microsoft's solution works best for PCs with infinite hardware configurations, so many users are going to benefit from it. The PS5 solution works great for PS5 devs: they can design exotic, customized hardware solutions, they don't have to consider PC user configurations, and they can fully focus on PS5. Microsoft's I/O is designed around handling large amounts of texture data using current SSD hardware on the market. It's necessary with lower-end SSD bandwidth for streaming large textures, which take up a lot of bandwidth, while also needing to stream animations, audio, general loading, etc. Hopefully the software is as good as they say it is. I bet Sony devs will have their own solution as well, because moving around this large amount of data and RAM is a universal problem.
 

Lethal01

Member
Both systems target different problems with I/O.
This is what I keep hearing, and it could be true, but everything I hear points to Sony's solution mostly targeting the same problem Microsoft has targeted with SFS, and we simply don't have hard numbers on the results. Maybe in a year we find out they did, but right now we just don't have info that points to that being the case. Like Senjutsu said, no need to compare it to Sony's solution; what we do know is it's a big step up from last gen.

I bet Sony devs will have their own solution as well, because moving around this large amount of data and RAM is a universal problem.

A universal problem that has third-party solutions, including one that seems to be showcased in the biggest game engine out there, Unreal Engine.

It just seems like this all comes back to Microsoft being the first to put a number on how much they expect an SSD paired with PRT to benefit them, when you may be able to just as easily slap on the multiplier from using Granite PRT and go "We are using memory 4x more efficiently, effectively giving us 44~88GB/s".
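To put the multiplier arithmetic in plain terms - the SSD figures below are the publicly quoted ones, and the 2.5x/4x multipliers are just the SFS and Granite claims being discussed, not measured results:

```python
# "Effective" texture bandwidth = physical throughput x sampling-efficiency multiplier.
# SSD throughputs are the publicly quoted figures; the multipliers are the claims under
# discussion in this thread, not measurements.

def effective_gbps(physical_gbps: float, sampling_multiplier: float) -> float:
    return physical_gbps * sampling_multiplier

xsx_compressed = 4.8   # Series X BCPack-compressed figure quoted by Microsoft
ps5_compressed = 9.0   # PS5 "typical" Kraken figure from the Road to PS5 talk

print(effective_gbps(xsx_compressed, 2.5))  # ~12 GB/s using Microsoft's "2-3x" SFS claim
print(effective_gbps(ps5_compressed, 4.0))  # ~36 GB/s using the Granite "4x" figure above
```

The 44~88GB/s range in the post follows from the same arithmetic, just starting from a higher compressed-throughput estimate.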
 
Last edited:

Corndog

Banned
The PS5 I/O architecture is a chain of technologies to maximize data throughput. I don't think Microsoft would have a magic bullet to counter that with software, but we need to see how this advantage will translate into games. Xbox has advantages in GPU grunt and memory bandwidth. In the end, both consoles excel in different aspects and there will be engines that like one over the other; to me, they are very similar. What Microsoft needs is to bring competition to the Sony Worldwide Studios. It would be a shame if the Xbox advantages translate only into slightly better resolution or a little better performance that we need Digital Foundry to look at closely to tell us the difference. It's Microsoft's studios' job to fully utilize the console's potential.
Microsoft’s solution isn’t software; it is hardware, just like Sony’s.

Edit: I agree with the rest of what you are saying.
 
Last edited:

Corndog

Banned
I don't get why you would link to a video by someone that is just giving their take on the same info we have direct from Epic, and getting things wrong (4 triangles per pixel, not 1) and in the comments for that video, their response to a question starts with "I think...", so they are just speculating like the rest of us.

People keep mentioning meshes of polygons for nanite, but I'm pretty sure Epic don't use those words when referring to nanite rendering, and specifically say "triangle" - and only use those other words when mentioning the content in the context of the creation tools/pipeline, prior to the assets being brought into the demo and rendered by nanite.

I'm more inclined to go with user onesvenus' take on things: that Nanite uses signed distance field (SDF) volumetric rendering - where geometry has a tiny memory footprint, because it describes the geometry perfectly by procedurally adding mathematical functions together to represent the geometry with infinite precision, and is then rendered via a fragment shader - with no vertex shader pass needed, unlike polygon primitive rendering.

As for the 768MB streaming pool, don't you find it odd that it is 3/4 of a GB, exactly - and not some other varied size to fit the UE5 assets optimally?
Now that I've clocked the specific number, it looks decidedly like (a multiple of) the physical memory size of the eSRAM in the IO complex, which could be why Epic are able to provide that number and still not reveal NDA hardware specs of the PS5.

While trying to google specs/costs of esram, to work out what might be in the IO complex, I stumbled on this article below

2020 Review of Intel's (2015) Broadwell CPU with 128MiB of esram

Based on that info, I doubt 768MB (512MB + 256MB) is in the budget of the PS5 BoM, but I do suspect it has 128MB + 64MB (or at least 64MB + 32MB), and that the 768MB streaming pool is a multiple of the physical buffer because of the 33ms frame times - whereas at 60Hz rendering I would speculate the effective pool would be 384MB.

My logic is that the UE5 demo is to show off REYES, so it needs to be exhausting the available RAM or available bandwidth/latency by the next frame, or Epic have failed to make the demo look as good as it could - at the reveal - which I don't believe is the case, because the demo literally looks unreal IMHO.

IMO, it would be impossible for the 768MB to be compressed geometry data, as you think, because you would need to store that 768MB in (unified) RAM, and what is the point in storing compressed data in RAM? Especially when all the data the UE5 demo needs should only be arriving in RAM, as needed, in an uncompressed state - because it was retrieved from the SSD by the IO complex, whose dedicated task is decompressing data to save the CPU/GPU from a compute-hungry job they can't do at low latency.
Why would they use mathematical functions to represent geometry? That would be very computationally expensive and defeats the purpose of fast I/O. Why use an algorithm when it is faster to just load in the data? It might be good for RenderMan, but not real-time graphics.
I haven't programmed in a long while, but I know that in the distant past look-up tables were way faster than actual computation. Many functions were precalculated and stored in indexed arrays for quick lookup.

Let me know where I am going wrong here. Keep it simple for me, though.

Edit: why not have a geometry LOD data type? As you zoom in, you zoom in on that portion of the geometry at a higher degree of accuracy. I have no idea how it would actually be stored. Maybe that's what Epic has done. It's also why I don't believe their PR about billions of vertices.
 
Last edited:

PaintTinJr

Member
Why would they use mathematical functions to represent geometry? That would be very computationally expensive and defeats the purpose of fast I/O. Why use an algorithm when it is faster to just load in the data? It might be good for RenderMan, but not real-time graphics.
I haven't programmed in a long while, but I know that in the distant past look-up tables were way faster than actual computation. Many functions were precalculated and stored in indexed arrays for quick lookup.

Let me know where I am going wrong here. Keep it simple for me, though.

Edit: why not have a geometry LOD data type? As you zoom in, you zoom in on that portion of the geometry at a higher degree of accuracy. I have no idea how it would actually be stored. Maybe that's what Epic has done. It's also why I don't believe their PR about billions of vertices.
After user onesvenus put me on to Signed distance fields I looked into it, and did a post in the next-gen thread (my third last post in the thread IIRC) with links to a website with a primer on the subject.

The long and short of it is that you can do amazing things with as little as 4KB; the examples page of the author's demo scenes over nearly 15 years is pretty compelling, IMO. The ability to mathematically define objects and ray-march render them allows for perfectly tessellated geometry with infinite detail, and makes huge savings by eliminating hidden-surface-removal computation as scenes get more complex, because the union or intersection combinations of SDFs mathematically eliminate drawing surfaces that aren't visible. As far as I understand, the SDFs would also reduce lighting calculations - in Lumen, compared to, say, rasterized GI techniques - because the mathematical definitions of the geometry make them more effective and efficient, though not cheap. In theory, the rendering redundancy savings would also extend to texture mapping.

If this were how Nanite works, then the 4 sampler locations per pixel are known at the end of Nanite's pass, leaving about 27ms for the IO complex to retrieve the specific texels and shade them before the next frame (at 1440p30).

Because SDFs define their geometry mathematically, I suspect excellent factorization can enable enormous optimisation, whereas rasterization largely brute-forces the fragment shader part. And with SDF evaluation being a fragment shader operation, 36 CUs with 64 ROPs at a few hundred MHz higher clock - and cache scrubbers - probably fits the problem really well. Not to mention that as an SDF gets smaller/more distant its computational cost goes down, and vice versa, so the frame rendering cost for a filled scene would be expected to be independent of geometry location.
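To be clear, Epic's own material describes Nanite as rasterizing tiny triangles, so treat the following purely as an illustration of the SDF properties I'm describing - distance functions for geometry, union/intersection as min/max, and stepping cost tied to the ray rather than the triangle count. A toy CPU-side sphere-trace, with every value in it made up for illustration:

```python
import math

# Toy signed-distance-field ray march (sphere tracing), CPU-side, single ray.
# Illustrates the SDF properties discussed above; it is NOT how Epic has described Nanite.

def sd_sphere(p, centre, radius):
    return math.dist(p, centre) - radius

def sd_box(p, centre, half):
    # Signed distance to an axis-aligned box with the given half-extents.
    q = [abs(p[i] - centre[i]) - half[i] for i in range(3)]
    outside = math.sqrt(sum(max(c, 0.0) ** 2 for c in q))
    inside = min(max(q[0], max(q[1], q[2])), 0.0)
    return outside + inside

def scene(p):
    # CSG for free: union is min(), intersection would be max(), subtraction max(a, -b).
    return min(sd_sphere(p, (0, 0, 5), 1.0),
               sd_box(p, (2, 0, 6), (0.5, 0.5, 0.5)))

def ray_march(origin, direction, max_steps=128, eps=1e-4, max_dist=100.0):
    t = 0.0
    for _ in range(max_steps):
        p = [origin[i] + direction[i] * t for i in range(3)]
        d = scene(p)
        if d < eps:
            return t          # hit: distance along the ray
        t += d                # safe to step by the distance value (sphere tracing)
        if t > max_dist:
            break
    return None               # miss

print(ray_march((0, 0, 0), (0, 0, 1)))   # ~4.0: the front of the sphere
```

Each step advances the ray by the scene's distance value, which is why the cost tracks what is actually on screen rather than how much geometry exists in the model.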
 
Last edited:

Darius87

Member
I don't get why you would link to a video by someone that is just giving their take on the same info we have direct from Epic, and getting things wrong (4 triangles per pixel, not 1) and in the comments for that video, their response to a question starts with "I think...", so they are just speculating like the rest of us.
They're not speculating; there's a real dev from Epic in that video, and he talks about Nanite.
The uploader probably isn't a dev, but the video is absolutely legit.
People keep mentioning meshes of polygons for nanite, but I'm pretty sure Epic don't use those words when referring to nanite rendering, and specifically say "triangle" - and only use those other words when mentioning the content in the context of the creation tools/pipeline, prior to the assets being brought into the demo and rendered by nanite.
No, in that video the dev mentions meshes many times. A mesh is a bulk of triangles, so it's just a common term; I don't see anything wrong with that, or why a dev couldn't use it.
I'm more inclined to go with user onesvenus' take on things: that Nanite uses signed distance field (SDF) volumetric rendering - where geometry has a tiny memory footprint, because it describes the geometry perfectly by procedurally adding mathematical functions together to represent the geometry with infinite precision, and is then rendered via a fragment shader - with no vertex shader pass needed, unlike polygon primitive rendering.
It's your choice who to believe, a real dev from Epic or onesvenus. I've no doubt onesvenus is right about what he's saying, but it's not what the UE5 demo uses. If onesvenus were right, then I guess the UE5 demo wouldn't have to stream any geometry, which clearly isn't the case.
As for the 768MB streaming pool, don't you find it odd that it is 3/4 of a GB, exactly - and not some other varied size to fit the UE5 assets optimally?
Now that I've clocked the specific number, it looks decidedly like (a multiple of) the physical memory size of the eSRAM in the IO complex, which could be why Epic are able to provide that number and still not reveal NDA hardware specs of the PS5.
That number has nothing to do with eSRAM or with being 3/4 of a GB; you're overthinking it. The dev says they will do better compression on Nanite, so the size will get even smaller.
My logic is that the UE5 demo is to show off REYES, so it needs to be exhausting the available RAM or available bandwidth/latency by the next frame, or Epic have failed to make the demo look as good as it could - at the reveal - which I don't believe is the case, because the demo literally looks unreal IMHO.
There's no REYES in the UE5 demo; if that were the case, we would already know it by now.
IMO, it would be impossible for the 768MB to be compressed geometry data, as you think, because you would need to store that 768MB in (unified) RAM, and what is the point in storing compressed data in RAM? Especially when all the data the UE5 demo needs should only be arriving in RAM, as needed, in an uncompressed state - because it was retrieved from the SSD by the IO complex, whose dedicated task is decompressing data to save the CPU/GPU from a compute-hungry job they can't do at low latency.
Did you even watch the video? The dev explained everything.
The point is to reduce the size of the geometry in the current view; the GPU can decompress data in RAM by itself.
You don't know that every asset in RAM is uncompressed. Some data needs to stay compressed in RAM to save space, and some might need to be uncompressed to save GPU resources; it's a dev's choice, so saying everything is uncompressed is simply not true.
 

PaintTinJr

Member
They're not speculating; there's a real dev from Epic in that video, and he talks about Nanite.
The uploader probably isn't a dev, but the video is absolutely legit.

No, in that video the dev mentions meshes many times. A mesh is a bulk of triangles, so it's just a common term; I don't see anything wrong with that, or why a dev couldn't use it.

It's your choice who to believe, a real dev from Epic or onesvenus. I've no doubt onesvenus is right about what he's saying, but it's not what the UE5 demo uses. If onesvenus were right, then I guess the UE5 demo wouldn't have to stream any geometry, which clearly isn't the case.

That number has nothing to do with eSRAM or with being 3/4 of a GB; you're overthinking it. The dev says they will do better compression on Nanite, so the size will get even smaller.

There's no REYES in the UE5 demo; if that were the case, we would already know it by now.

Did you even watch the video? The dev explained everything.
The point is to reduce the size of the geometry in the current view; the GPU can decompress data in RAM by itself.
You don't know that every asset in RAM is uncompressed. Some data needs to stay compressed in RAM to save space, and some might need to be uncompressed to save GPU resources; it's a dev's choice, so saying everything is uncompressed is simply not true.
I watched some of the video, right up to the point where the video is factually wrong about the triangle count per pixel - as it contradicts the UE5 demo creators' video.

Then I stopped and looked to see what the video author's background was. The person responding to the comments as cghero - the video channel's author, AFAIK - wrote words that indicate they are speculating. Even if they work for Epic, it doesn't mean they are a coder on the UE5 Nanite/Lumen demo team.

If that video is true, then fine, but the lack of geometry aliasing, the billions of source polygons in the assets, the abundance of 8K textures, and the full-screen GI with infinite bounces in the demo suggest it isn't using polygon primitives as we know them, IMHO.
 

Darius87

Member
I watched some of the video, right up to the point where the video is factually wrong about the triangle count per pixel - as it contradicts the UE5 demo creators' video.
I watched and I didn't see any mention of triangle count per pixel; at what time does he say that?
Then I stopped and looked to see what the video author's background was. The person responding to the comments as cghero - the video channel's author, AFAIK - wrote words that indicate they are speculating. Even if they work for Epic, it doesn't mean they are a coder on the UE5 Nanite/Lumen demo team.
lol :messenger_grinning_smiling: I'm not talking about the uploader, I'm talking about the dev in the video. He is Marcus Wassmer, engineering director at Epic Games; he's not speculating.

If that video is true, then fine, but the lack of geometry aliasing, the billions of source polygons in the assets, the abundance of 8K textures, and the full-screen GI with infinite bounces in the demo suggest it isn't using polygon primitives as we know them, IMHO.
what? every 3D game uses primitives :messenger_grinning_smiling:
 

PaintTinJr

Member
I watched and I didn't see any mention of triangle count per pixel; at what time does he say that?

lol :messenger_grinning_smiling: I'm not talking about the uploader, I'm talking about the dev in the video. He is Marcus Wassmer, engineering director at Epic Games; he's not speculating.


what? every 3D game uses primitives :messenger_grinning_smiling:

He's not speculating about what? Linking to a random video with parts of an official source and expecting me to make your argument for you feels a little bit unclear.

What exactly has Marcus Wassmer said in the cghero Nanite video you linked - please quote it verbatim in text - that isn't in the official UE5 dev videos, and that also contradicts something I've speculated on?
/edit:

Are you saying the entire video is just the UE5 engineer, with no one else giving their own coverage? I.e., this isn't a DF-style video giving their spin on, say, The Road to PS5? If so, I'll watch it in full and get back to you.
 
Last edited:

Darius87

Member
He's not speculating about what? Linking to a random video with parts of an official source and expecting me to make your argument for you feels a little bit unclear.
Random video? :messenger_grinning_smiling: It's an official video from Unreal Fest 2020.
What exactly has Marcus Wassmer said in the cghero Nanite video you linked - please quote it verbatim in text - that isn't in the official UE5 dev videos, and that also contradicts something I've speculated on?
It doesn't contradict anything; he's an Epic dev who is making UE5. I don't know what you want me to quote. He isn't saying anything about the number of triangles per pixel; that's your assumption.
 

PaintTinJr

Member
Random video? :messenger_grinning_smiling: It's an official video from Unreal Fest 2020.

It doesn't contradict anything; he's an Epic dev who is making UE5. I don't know what you want me to quote. He isn't saying anything about the number of triangles per pixel; that's your assumption.
At the start of the video the person mentions that cinematic models at 1 triangle per pixel can be imported - although the screen resolution isn't mentioned - but because I was unaware this wasn't a random video by the uploader, I assumed the person was talking about Nanite rendering, not conventional modelling, because Nanite's reveal video shows an overlay of the triangles and specifies 4 per pixel... so obviously my mistake; had I known it was legit and scrutinized the context, I would have realised that isn't a contradiction.

Anyway, will watch and respond, later :)
 
Last edited:

PaintTinJr

Member
Darius87

I watched the video, and it is one I already watched - unabridged previously - when it first came out.

Watching it again was surprisingly worthwhile, as I noticed some new subtleties in the phrasing, particularly at 7:40, that further suggest SDF-type rendering rather than polygon primitives for the >90% rigid geometry Nanite renders, according to the screenshot slide at 8:46.

The audio accompanying the slide at 7:40, talking about Nanite's performance, says "rasterizing" to output the g-buffer in 4.6ms. So what about the costs of the rest of the rendering pipeline stages? If Nanite is a form of SDF rendering, then using only the fragment shader is how SDF works, and that is just rasterizing.
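For context on the budget question - the 4.6ms is the quoted figure, the 30fps target is the demo's, and the rest is just subtraction:

```python
# Frame-budget arithmetic around the quoted 4.6ms Nanite figure, at the demo's 30fps target.
frame_ms = 1000 / 30          # ~33.3ms per frame at 30Hz
nanite_gbuffer_ms = 4.6       # "rasterizing to g-buffer" figure quoted from the slide at 7:40
remaining_ms = frame_ms - nanite_gbuffer_ms

print(f"~{remaining_ms:.1f} ms left per frame for lighting (Lumen), shadows, post, streaming, etc.")
```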

The later slide at 8:46 has a section, "Don't use with aggregates", and those would also be tricky elements for SDFs, ones that could cripple performance through instance count, and probably complexity, and that when simplified would become a porous volume. The final line suggests to me that Nanite doesn't use "traditional rendering techniques"; otherwise, why would these types of geometry be excluded from Nanite?

It seems you are correct about compressed data in RAM, but I understood it to be engine-used data that is stored in a compressed format for rapid recovery, and I was surprised that it seems like they are compressing out to disk too - which might be bad for SSD lifespans - though that data is presumably going out through the IO complex for compression, rather than being the GPU-compressed data stored in RAM.

I noticed some other things, but would probably need to watch again, to jog my memory.
 

PaintTinJr

Member
Darius87

Other observations I had are that, just after 8:32 in the video, it is stated that 1 million Nanite triangles with a single UV map channel take the same storage as a 4K normal map.

So assuming 4096 x 4096 x 4 bytes (2 bytes per component for unit-length normal X and Y components, with Z derived in the shader), that gives 64MB per 1 million Nanite triangles; and if we already know that the demo renders roughly 20M triangles per frame (from billions in the source models), that gives us 1280MB of geometry in an uncompressed state.

As they are compressing it to RAM with the GPU, as a low-latency, light-computation compression, that 40% reduction down to 768MB - for data that would compress heavily in an offline exhaustive compression, or via the IO complex - seems to all line up, and it is interesting that GPU compression is needed in spite of the IO complex.
On that basis, it makes sense that they believe they can heavily optimise the GPU compression ratio with the same low latency and light processing burden, as they mention in the slide at the 8:00 mark.
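Spelling that estimate out - the only number from Epic is the 1M-triangles-per-4K-normal-map equivalence; the 4 bytes per texel and the 20M triangles per frame are my assumptions:

```python
# Worked version of the storage estimate above. The 1M-triangles-per-4K-normal-map
# equivalence is the quoted Epic figure; bytes-per-texel and triangle count are the
# post's assumptions, not confirmed numbers.

TEXELS = 4096 * 4096
BYTES_PER_TEXEL = 4                 # assumed: 16-bit X and Y, Z reconstructed in the shader
normal_map_mb = TEXELS * BYTES_PER_TEXEL / (1024 ** 2)              # 64 MB

triangles_per_frame = 20_000_000    # rough per-frame figure quoted for the demo
uncompressed_mb = normal_map_mb * triangles_per_frame / 1_000_000   # ~1280 MB

pool_mb = 768
print(f"{uncompressed_mb:.0f} MB uncompressed vs a {pool_mb} MB pool "
      f"= {1 - pool_mb / uncompressed_mb:.0%} reduction")           # ~40%
```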

On the question of: are traditional primitives being used for Nanite?

At 3:37 in the video they give a closeup of the ancient warrior's head. On the left is the untextured nanite render, and on the right is the untextured creation tools render with the wireframe primitives overlaid showing the extent of polygon density.

What I find interesting about this comparison is that it's an excellent way to visualise triangle density, but anyone familiar with the vertex pipeline knows that, to avoid z-fighting, the creation tool isn't using the fast and inaccurate primitive assembly algorithms used in real-time games, and it is also able to render the model in frustum layers - like a painter's algorithm, back to front - so that the instability of z calculations when projecting high polygon counts into small areas across a large near/far ratio can be divided and conquered to display the model perfectly.

Then when you look at how the three-way split sections at, say, 6:35 are rendered in Nanite, you see that they haven't used the wireframe overlay technique, even on the right-hand view, and when looking at the millions of overlapping triangles in the left view you can see that none of them are z-fighting - despite having none of the creation tool's processing-time options that traditional rendering needs to render so accurately at such polygon density. All of which, IMHO, points to Nanite using some form of SDF rendering.
 

Lethal01

Member
Darius87

Other observations I had are that, just after 8:32 in the video, it is stated that 1 million Nanite triangles with a single UV map channel take the same storage as a 4K normal map.

So assuming 4096 x 4096 x 4 bytes (2 bytes per component for unit-length normal X and Y components, with Z derived in the shader), that gives 64MB per 1 million Nanite triangles,
This is something that might be useful to remember

At 3:37 in the video they give a closeup of the ancient warrior's head. On the left is the untextured nanite render, and on the right is the untextured creation tools render with the wireframe primitives overlaid showing the extent of polygon density.
I'm pretty sure both of those are Zbrush renders, not nanite.

Then when you look at how the three-way split sections at, say, 6:35 are rendered in Nanite, you see that they haven't used the wireframe overlay technique, even on the right-hand view, and when looking at the millions of overlapping triangles in the left view you can see that none of them are z-fighting -
You really can't see that at all, youtube compression is way too high.
 

PaintTinJr

Member
This is something that might be useful to remember


I'm pretty sure both of those are Zbrush renders, not nanite.


You really can't see that at all, youtube compression is way too high.
Watch the video again, particularly the three-way split renders in Nanite: it is a different method of rendering geometry, and nothing to do with YouTube compression, which is why you only see silhouette lines but no conventional triangle-strip wireframe overlay - the kind that would show primitive assembly - and why z-fighting isn't an issue at all, despite scene complexity that would test conventional rendering to its limits.

Edit.
Not only would z-fighting be a nightmare to alleviate in the Nanite left-side view, but so would keeping primitive aliasing under control, because as the polygons get smaller and more abundant the number of aliasing edges increases; and because they are so small, projected into just a few pixels with potentially overlapping, z-fighting neighbours, they become difficult to anti-alias as-is, with so many potential primitive assembly errors in one place. That is normally solved by super-sampling at a resolution where the errors aren't present, which clearly isn't happening if they are rendering at native ~1440p and using reconstruction upscaling to 4K30.
 
Last edited:

Darius87

Member
The audio accompanying the slide at 7:40, talking about Nanite's performance, says "rasterizing" to output the g-buffer in 4.6ms. So what about the costs of the rest of the rendering pipeline stages? If Nanite is a form of SDF rendering, then using only the fragment shader is how SDF works, and that is just rasterizing.
You should watch the whole video; they talk about Lumen as well. Sadly, for now the rendering cost of Lumen only fits in a 30fps game on PS5.
 

PaintTinJr

Member
You should watch the whole video; they talk about Lumen as well. Sadly, for now the rendering cost of Lumen only fits in a 30fps game on PS5.

No, that's a separate (deferred/supplemental) process; just rendering a flat, basic triangle with conventional rendering has at least a vertex/geometry stage and a fragment/rasterizer stage.

Nanite works without Lumen, and in all likelihood will do so on phone/tablet devices, or on hardware too weak to support Lumen. If Nanite only uses rasterization, then it isn't traditional 3D primitive rendering.
 

Lethal01

Member
You should watch the whole video; they talk about Lumen as well. Sadly, for now the rendering cost of Lumen only fits in a 30fps game on PS5.


"now" was like a year ago, and they said it will be running at 60fps on PS5.

Watch the video again, particularly the three-way split renders in Nanite: it is a different method of rendering geometry, and nothing to do with YouTube compression, which is why you only see silhouette lines but no conventional triangle-strip wireframe overlay - the kind that would show primitive assembly - and why z-fighting isn't an issue at all, despite scene complexity that would test conventional rendering to its limits.

[screenshot from the video]


Like I said, YouTube compression/video quality makes it impossible to see whether there is z-fighting or not.
 
Last edited:

PaintTinJr

Member
"now" was like a year ago, and they were planning to get it running at around 45fps on PS5.



[screenshot from the video]


Like I said, YouTube compression/video quality makes it impossible to see whether there is z-fighting or not.
Z-fighting is noise and very noticeable at such traditional polygon densities, like a moire pattern; we would see it, because the YouTube encoder wouldn't be able to remove it, and it would also be present in the right-hand side (of your screen grab) and be more distracting when the scene is in motion.
With such a flawless render it would also impact the lighting with sporadic discontinuities too - which aren't visible either.
 
Last edited:

Lethal01

Member
Z-fighting is noise and very noticeable at such traditional polygon densities, like a moire pattern; we would see it, because the YouTube encoder wouldn't be able to remove it,
The left side view is totally unreadable and any Z fighting would absolutely be hidden due to the compression here.

and it would also be present in the right-hand side (of your screen grab) and be more distracting when the scene is in motion.
With such a flawless render it would also impact the lighting with sporadic discontinuities too - which aren't visible either.
From experience, I can say that switching to the object ID view like they are doing in the demo also tends to hide z fighting.
 
Last edited:
Not enough next-gen games are out to compare. The games that are out don't show any of the magic sauce Xbox fanboys have been wet dreaming about since the console launch. At least Sony fanboys have a few games that show what's possible.
Well, even though it's a last-gen game upgraded, Gears 5 and the Hivebusters DLC are very impressive technically: super-fast loading, a high, rock-solid framerate, and high resolution the majority of the time.
 

PaintTinJr

Member
The left side view is totally unreadable and any Z fighting would absolutely be hidden due to the compression here.


From experience, I can say that switching to the object ID view like they are doing in the demo also tends to hide z fighting.
It is readable IMO... in that the shape is still defined clearly, which under such circumstances - a view frustum with the near plane probably 25cm away and the far clip plane 5-10km away - would z-fight, mainly because the z precision would be so sparsely distributed over most of the frustum; at that point those polygons' fragments would be converging to 1.0 and losing the geometry's shape - because they are pixel-sized - while fluctuating between nearly z=1.0 and 1.0, and fighting with their neighbours, IMHO.
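To put numbers on the precision point - with a standard (non-reversed) [0,1] depth mapping the stored value is hyperbolic in view distance, so nearly the whole range is spent near the camera; a small sketch using the near/far values I'm assuming above (not known demo settings):

```python
# Depth-buffer value vs. view distance for a standard [0,1] (non-reversed-Z) projection.
# The 0.25m near / 10km far values are the assumptions from the post, not known demo settings.

def window_depth(z: float, near: float = 0.25, far: float = 10_000.0) -> float:
    # Standard hyperbolic depth mapping: 0 at the near plane, 1 at the far plane.
    return far * (z - near) / (z * (far - near))

for z in (0.25, 1.0, 10.0, 100.0, 1_000.0, 10_000.0):
    print(f"z = {z:>8.2f} m -> stored depth = {window_depth(z):.8f}")

# Everything beyond a few hundred metres lands in a tiny slice just below 1.0,
# which is where z-fighting between pixel-sized triangles would show up.
```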

Under normal rendering circumstances with z-fighting you would have many large triangles - in this context, that might just mean a projected triangle covering more than 20 pixels - which would retain the overall geometry shape in spite of the z-fighting. Unless they are splitting the frustum heavily, I can't see how they would be using 3D primitives instead of SDFs, especially as everything is soft-shadowed, which is cheap to render well with an SDF but expensive and virtually impossible to render traditionally without stable z values, and impossible to render well for a frustum with such a distance between near and far clip planes - AFAIK. So I'd need you to convince me how the scene was rendered - even setting aside the billions-to-millions of triangles feat they are achieving.
 
Last edited: