
Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.

LucidFlux

Member
So this culling discussion...

I had originally written more, but I want to focus on two points: the lossless argument, and whether Nanite counts as culling.

From Epic: "Nanite crunches down billions of polygons worth of source geometry losslessly to 20m drawn triangles."

So Nanite is able to generate, on the fly, an effectively infinite range of LODs for every object based on what the viewport requires for that specific frame. If an object is far enough from the camera that small details couldn't be seen, then they aren't drawn. THAT is what Epic means by lossless: the final frame wouldn't look any different if it were drawn with the full-quality assets vs. what Nanite crunched down, because the polygons are already as small as a pixel. The source geometry is also unchanged (although why would it be?), so I guess it's lossless in that sense too. In my world this is just called working non-destructively, where you preserve the original asset or image.

If you want to argue that what Nanite is doing is also a form of culling, then I do kinda see the point. In a broad sense it's reducing geometric complexity to increase performance, same goal. However, Nanite is not replacing traditional frustum culling; it works in concert with it. Nanite first crunches down the scene to determine the necessary polygons for the frame, then the traditional culling methods are applied once the scene is built. Nanite has to do its work first, creating all the unique LODs to determine the polygons for that frame, before the culling pass.

So by a broad definition Nanite is culling the original assets on the fly. It is not, however, replacing or even performing traditional culling of back-facing, obstructed or off-screen geometry.
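(If it helps to see the idea in code: here's a toy sketch of pixel-driven LOD selection. This is purely illustrative, a made-up heuristic with made-up numbers, not Nanite's actual algorithm.)

```python
import math

def pick_triangle_budget(object_radius, distance, fov_deg, screen_height_px):
    """Toy heuristic: spend roughly one triangle per covered pixel.

    Projects the object's bounding sphere to screen space and returns a
    triangle budget close to the number of pixels it covers, i.e.
    "polygons as small as a pixel". Not Epic's algorithm.
    """
    half_fov = math.radians(fov_deg) / 2
    # Pinhole-camera approximation of the projected radius in pixels.
    radius_px = object_radius / (distance * math.tan(half_fov)) * (screen_height_px / 2)
    covered_px = math.pi * radius_px ** 2
    return max(12, int(covered_px))  # clamp so a distant object keeps a crude hull

# The same source asset gets a huge budget up close and a tiny one far away.
near = pick_triangle_budget(1.0, distance=2.0, fov_deg=60.0, screen_height_px=2160)
far = pick_triangle_budget(1.0, distance=200.0, fov_deg=60.0, screen_height_px=2160)
assert near > far
```

Past that budget, extra triangles land inside the same pixels, which is why the drawn result can look identical to the full-quality asset.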
 

M1chl

Currently Gif and Meme Champion
I mean, it's probably not a far-fetched possibility that the game was just compiled with the GDK, not built with GDK-specific APIs. Probably for notarization, to get approved. That doesn't mean the GDK lacks the old functionality of the past DX SDKs, though.
 
Hell yeah! Let's slide into next gen!
 

IntentionalPun

Ask me about my wife's perfect butthole
@TheThreadsThatBindUs and @IntentionalPun
They probably say "lossless" because it doesn't matter whether it's 20 billion or 20 million; on a 4K screen it will look almost the same. So to the eye it's lossless, "zoom in, zoom out", like optical magnification.
Yes; I explained that. There's no loss in detail because there are only ~8 million pixels anyways on a 4k screen.

The engine does this automatically and dynamically scales assets as they take up more/less pixels on screen. (closer or farther from view)
 

Hashi

Member
It's hard for me to say what UE5 has on board for streaming. The detail (when that girl flies) looked like there was no LOD, just full-poly-count assets. Maybe they have technology to stream the positions of all triangles in real time...
I don't know
 

HoofHearted

Member
Enjoy the meltdown; not exactly the most technical, but it gives an idea:

It's practically the same. I just noticed a lower-resolution ray-tracing reflection in a metal basket on Series X in the final part of the video, but it could mean nothing.


Sigh....

Somehow I find it rather ironic (or fitting?) that we're now to the point in this generation of nitpicking and comparing reflections in trashcans.....

Maybe Remedy is watching and they'll add it to their backlog of items to fix..
 

HoofHearted

Member
You: This is absurdly false. How can you call a reduction of polygons "lossless". You're losing information (in this case triangles) by definition.

Epic: There are over a billion triangles of source geometry in each frame, that Nanite crunches down losslessly to around 20 million drawn triangles.


You: "It isn't a form of scaling at all. With respect, you don't seem to know what you're talking about."

Epic: "Nanite geometry is streamed and scaled in real time so there are no more polygon count budgets"

lol
Some posters here really have issues. I mean, you consistently quote industry figures out of context, consistently misunderstand said quotes and then when other posters try to explain where you're incorrect in your reasoning you jump straight to ad hominem attacks.

it's pathetic and sad.


Girls ... Girls... GIRLS...

You're BOTH pretty.. :)
 

IntentionalPun

Ask me about my wife's perfect butthole
Girls ... Girls... GIRLS...

You're BOTH pretty.. :)
*shrug*

He got insulting when I was basically echoing Epic's own statements.

edit: I definitely escalated it though lol.. and fuck it... he could be right.. maybe Epic's "scaling" is largely done via culling.. still feel like I'm being trolled with how he responded to my statements that directly echo Epic

edit 2: removed my repetitive arguing
 
Last edited:

Lol... How the fuck do you expect anyone to respond to you when this is the level of puerile response you post:

God you are so full of shit.

Full of shit troll.

And somehow I was being insulting...

 

assurdum

Banned
Sigh....

Somehow I find it rather ironic (or fitting?) that we're now to the point in this generation of nitpicking and comparing reflections in trashcans.....

Maybe Remedy is watching and they'll add it to their backlog of items to fix..
The most annoying thing about this channel is that the author has no clue how to build a proper comparison. Too often the camera distance isn't matched on both pieces of hardware (I wonder if it's intentional, because it's unbelievable he gets it wrong so many times). He compares AF in a shot where the AF is difficult to spot, same for shadows or, in this specific case, the ray-tracing reflections. Furthermore, he draws strange conclusions without evidence ("some elements are missing in the PS5 ray tracing", he said, anecdotally); the only time I noticed a difference it was the exact opposite. This guy is something else
 
Last edited:

LucidFlux

Member


They give and then they take it away 😔


What is the fucking reason, holy shit. It's Unreal 4 probably anyway.
Edit: Apparently it was ported from UE3 to UE4 (although KF is unsure; it could still be on UE3). This is why there is no ray tracing, though.
In addition to better textures, resolutions and frame rates there are also updates to lighting, FX, the UI, and controls.
All versions will run at 60fps.


While they aren't creating a separate "next-gen" version, Director Mac Walters stated "the game would experience some next-gen hardware perks for those playing on PS5 or Xbox Series X."
"There are some things that'll let you get to higher framerates, keep resolution higher, and stuff like that."
 
Last edited:

HoofHearted

Member
The most annoying thing about this channel is that the author has no clue how to build a proper comparison. Too often the camera distance isn't matched on both pieces of hardware (I wonder if it's intentional, because it's unbelievable he gets it wrong so many times). He compares AF in a shot where the AF is difficult to spot, same for shadows or, in this specific case, the ray-tracing reflections. Furthermore, he draws strange conclusions without evidence ("some elements are missing in the PS5 ray tracing", he said, anecdotally); the only time I noticed a difference it was the exact opposite. This guy is something else
Yeah - all of these comparison videos are all over the place IMHO. I've seen this guy's work before - it's ok - but doesn't really show/tell much.

Overall - (at least to-date) - all the games (previous gen) released are so close.. it really doesn't matter.

The few exceptions were the first wave of games released (Dirt and AC:V) - and even those have been patched to yield very close results on both platforms.

I am somewhat interested in DF's analysis of Control UE (assuming they're doing one)...
 

Bo_Hazem

Banned
If devs opt to compress data on XSX to the same degree as they do on PS5, that will inevitably lead to slower decompression on it and everything that entails.

This is basic.

Yup, Kraken is 297% faster than ZLIB, and it can still be decoded on machines without Kraken decompressor HW (PS4/Xbox), just not as fast as on dedicated hardware.
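To put the streaming effect of compression into numbers, here's a back-of-the-envelope sketch. The bandwidth and ratio figures below are illustrative assumptions, not official console specs.

```python
def effective_read_speed(raw_gbps, compression_ratio, decode_gbps):
    """Effective streaming speed of compressed assets.

    The SSD delivers compressed bytes at raw_gbps; they expand by
    compression_ratio, but only if the decompressor can emit that much
    output (decode_gbps). The slower stage is the bottleneck.
    """
    return min(raw_gbps * compression_ratio, decode_gbps)

# Illustrative numbers only: a 5.5 GB/s drive with a 2:1 ratio needs a
# decompressor that can emit 11 GB/s of output to keep up.
assert effective_read_speed(5.5, 2.0, 22.0) == 11.0
# A slower (e.g. software) decoder becomes the bottleneck instead:
assert effective_read_speed(5.5, 2.0, 8.0) == 8.0
```

This is why a hardware decompressor matters even when the same codec can run in software on older machines.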

 

Bo_Hazem

Banned
You should tell that to them. Although, i doubt they'll even listen. They seem to think that the fridge meme has some sort of significance now, so they're just pushing things even further when it's not even necessary.

It worked great the first time, when they acknowledged it, but like a child they repeated it multiple times, to the point it's insulting to look at and a mood breaker.
 
Last edited:

Just to clarify, I don't think anyone was arguing that Nanite was replacing traditional frustum culling. If anything, the discussion kicked off on the back of a comment from Three Jackdaws, whose rendering engineer colleague speculated that Epic is doing something a bit more novel than the traditional view-frustum-culling approaches.

My primary point is that "culling" as a term can be considered the general principle of discarding triangles prior to rendering. The approach doesn't have to directly relate to the view frustum like traditional approaches. An example is degenerate culling of sub-pixel triangles.

In which case, I was arguing that Nanite's process for dynamically producing lower-LOD meshes can be considered culling, for sure. It's not view-frustum, backface, occlusion, Z-culling, degenerate or contribution culling... but it still involves culling in the most general sense.

Again, to be clear, I'm not saying Nanite is just culling. I'm saying that it perhaps includes a number of novel culling approaches. Fundamentally, if triangles are discarded from a mesh, you'd call that culling. So Nanite's dynamic generation of lower-LOD meshes would include culling in its most general sense.

@TheThreadsThatBindUs and @IntentionalPun
They probably say "lossless" because it doesn't matter whether it's 20 billion or 20 million; on a 4K screen it will look almost the same. So to the eye it's lossless, "zoom in, zoom out", like optical magnification.

"Lossless" is a very specific and generally well defined term in computing.

It speaks to the reversible compression of data, i.e. a compression method that doesn't irreversibly reduce data. Meaning if you run the process in reverse, you will end up with your complete initial dataset with no information loss.

What I was arguing with the other guy was that when the GPU loads in the mesh data from memory and performs the culling operation on it to discard triangles, it isn't lossless, because you cannot run that culling operation in reverse and end up with the original dataset.

What Epic is talking about is the base asset stored on the mass storage device. Meaning that when Nanite performs its geometry processing to produce a lower-LOD model for rendering, the mesh data on the disc/HDD/SSD remains unchanged and can therefore be re-loaded and reprocessed in a different scene to produce a different LOD model for rendering.

The other guy seemed not to grasp that the initial point of his that I responded to wasn't saying the same thing Epic was referencing.
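To make the distinction concrete, here's a minimal Python sketch: zlib stands in for any lossless codec, and the "cull" is a stand-in list operation, not Nanite's actual processing.

```python
import zlib

# Lossless compression: the round trip reproduces the input byte-for-byte.
mesh_bytes = b"v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n" * 1000
packed = zlib.compress(mesh_bytes)
assert zlib.decompress(packed) == mesh_bytes   # fully reversible
assert len(packed) < len(mesh_bytes)           # and still smaller

# "Culling" a triangle list: drop every other triangle.
triangles = [(i, i + 1, i + 2) for i in range(0, 30, 3)]  # 10 triangles
culled = triangles[::2]                                   # 5 remain
assert len(culled) < len(triangles)
# Nothing in `culled` records what was removed, so there is no inverse
# operation: from `culled` alone the original list cannot be rebuilt.
```

The source file on disk is untouched in both cases, which is the "non-destructive" sense of the word.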
 
Last edited:

IntentionalPun

Ask me about my wife's perfect butthole
And somehow I was being insulting...

Yeah you were. Like I said, I escalated.. you were insulting in response to my comments that can be directly attributed to things Epic actually said about UE5, before I ever got insulting with you. Throwing "with respect" in front of it doesn't really make it not insulting lol
 

IntentionalPun

Ask me about my wife's perfect butthole
What Epic is talking about is the base asset stored on the mass storage device. Meaning that when Nanite performs its geometry processing to produce a lower-LOD model for rendering, the mesh data on the disc/HDD/SSD remains unchanged and can therefore be re-loaded and reprocessed in a different scene to produce a different LOD model for rendering.

They quite literally say they are crunching that base asset down from 1 billion to 20 million polygons "losslessly". That's what Nanite does... that's what I said it does... it then scales that geometry to as close to the rendering resolution as possible, without any loss of detail, by scaling to a number of micro-polygons as close to the number of pixels as possible.

AKA it losslessly scales massive models down to smaller models (likely storing the 20-million-poly / greater-than-4K model on disk), then also scales them even smaller (losslessly, as in with no loss of detail) depending on the actual rendering resolution and distance from the object (aka how many pixels it's occupying).

You called all of this "absurdly false", and claimed "I don't know what I'm talking about."

You made 2 claims:

- The process that Epic describes as "lossless scaling" is actually largely involving culling
- That culling can never be lossless

You can't be right on both things dude.
 
Last edited:
They quite literally say they are crunching that base asset down from 1 billion to 20 million polygons "losslessly". That's what Nanite does... that's what I said it does... it then scales that geometry to as close to the rendering resolution as possible, without any loss of detail, by scaling to a number of micro-polygons as close to the number of pixels as possible.

AKA it losslessly scales massive models down to smaller models, then also scales them even smaller (losslessly, as in with no loss of detail) depending on the actual rendering resolution and distance from the object.

You called all of this "absurdly false", and claimed "I don't know what I'm talking about."

You're parroting the quote from Epic without thinking critically about what you're saying.

If "lossless" means no reduction in data, then discarding hundreds of millions of polygons from a mesh, irreversibly (the dynamically produced output model stored in memory holds no data relating to the discarded triangles, and of course it won't, because that would be a waste of memory), is by definition not lossless... i.e. what I said. It's also simple, plain old logic. Think about it.

This of course cannot be reconciled with Epic's comments about the approach being lossless. Therefore we have to ask: what is Epic referring to?

With a little thought, it becomes easy to realize that the Epic comments must be relating to the source assets on the disk.

When you consider what Nanite is doing versus the traditional approach it makes sense, i.e.:

Traditional approach:
  1. Artist makes an extremely high-poly model in Maya/3D Studio etc.
  2. Artist has to author a number of different LOD-level models of the above asset for the purpose of improving rendering efficiency
  3. All LOD-level models are stored within the game data on the disk and can be loaded into RAM as needed at runtime
  4. At runtime, the game simply loads in the correct LOD-level model for each asset as appropriate
Nanite method:
  1. Artist makes an extremely high-poly model in Maya/3D Studio etc.
  2. Only the highest-fidelity models are stored as part of the game data on the disk
  3. Nanite dynamically generates lower-LOD meshes at runtime, using a combination of culling and scaling geometry, for rendering as appropriate
  4. The new lower-LOD meshes are not stored back on the disk replacing the original highest-fidelity assets (thus, from the perspective of the source asset, information is not lost... and so it can be considered lossless)
  5. Thus in any given scene, Nanite can re-load the original highest-fidelity mesh and regenerate new LOD meshes for rendering as the in-game scene changes
It's pretty simple and easy to understand when you give it a bit more thought instead of just taking Epic's admittedly pretty vague comments at face value.

Generally, most of computer graphics rendering is lossless in the sense that the game assets aren't overwritten during the course of the process of rendering the game. However, from the perspective of the data on the GPU and how it changes through the chain of operations the GPU performs during rendering, much of it can be considered lossy, because the input data, i.e. vertices, light maps, texture data, etc etc, is operated upon in an irreversible way.
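The two workflows in the list above can be contrasted in a short sketch. Everything here is an assumption-laden illustration of the described pipelines: `decimate` is a crude stand-in for whatever geometry processing Nanite actually performs, and the asset names are invented.

```python
# Traditional: LODs are authored offline and all shipped on disk.
AUTHORED_LODS = {0: "hero_100k_tris", 1: "hero_20k_tris", 2: "hero_2k_tris"}

def traditional_pick(distance):
    """Pick a pre-built LOD by distance band; nothing is generated at runtime."""
    if distance < 10:
        return AUTHORED_LODS[0]
    if distance < 50:
        return AUTHORED_LODS[1]
    return AUTHORED_LODS[2]

# Nanite-style (as described above): only the full asset ships; lower LODs
# are derived at runtime and the source is never overwritten.
SOURCE_ASSET = list(range(1_000_000))  # stand-in for a very dense mesh

def decimate(asset, budget):
    """Crude stand-in for runtime LOD generation: keep ~`budget` triangles."""
    step = max(1, len(asset) // budget)
    return asset[::step][:budget]

runtime_lod = decimate(SOURCE_ASSET, 20_000)
assert len(runtime_lod) == 20_000
assert len(SOURCE_ASSET) == 1_000_000  # source untouched: "non-destructive"
```

The key design difference is only where the LODs come from: a lookup into pre-authored data versus a runtime derivation that always starts from the intact source.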
 
Last edited:

Zoro7

Banned
Why can't you two take your fight to PM's? Trust me no one gives a shit. I don't even know how to @ someone here. lol fml
 
Why can't you two take your fight to PM's? Trust me no one gives a shit. I don't even know how to @ someone here. lol fml

I'm not fighting anyone anymore. Just clarifying the discussion while replying to LucidFlux and Hashi.

There's really not much else going on in this thread anyway, so I don't see why discussion on the subject of Nanite and how it works should be prohibited.

It can make for some interesting discourse, provided people don't start getting emotional and being rude.
 
Last edited:

IntentionalPun

Ask me about my wife's perfect butthole
This of course cannot be reconciled with Epic's comments about the approach being lossless. Therefore we have to ask: what is Epic referring to?
I already explained what they are referring to, multiple times.. and stated what they were referring to in that very post you just quoted.
 
I already explained what they are referring to, multiple times.. and stated what they were referring to in that very post you just quoted.

You didn't explain anything. Your argument is that Epic says it's lossless, therefore it must be lossless. That's not an explanation.

What you did say:

AKA it losslessly scales massive models down to smaller models (likely storing the 20-million-poly / greater-than-4K model on disk), then also scales them even smaller (losslessly, as in with no loss of detail) depending on the actual rendering resolution and distance from the object (aka how many pixels it's occupying).

Doesn't make sense. Why?

Scaling, at its most granular level, means changing the size of an object.

A polygonal mesh is a composite of many such objects. You can scale the mesh, but that doesn't reduce the number of triangles in it; it only changes their size.

If you remove triangles, you're removing information, as well as detail. How can you not be? How can you claim an 8-million-poly model can have the same detail as an 8-billion-poly model... it's impossible... thus an absurdly false claim.
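The distinction being argued, that scaling changes triangle size while removing triangles changes triangle count, is easy to demonstrate with a toy mesh (made-up data, purely illustrative):

```python
def scale_mesh(vertices, factor):
    """Uniform scale: every vertex moves, but nothing is added or removed."""
    return [(x * factor, y * factor, z * factor) for x, y, z in vertices]

# A tiny strip: 6 vertices, 4 triangles (index triples into `vertices`).
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0),
            (1.0, 1.0, 0.0), (2.0, 0.0, 0.0), (2.0, 1.0, 0.0)]
triangles = [(0, 1, 2), (2, 1, 3), (1, 4, 3), (3, 4, 5)]

shrunk = scale_mesh(vertices, 0.5)
assert len(shrunk) == len(vertices)   # scaling preserves vertex count...
assert len(triangles) == 4            # ...and triangle count: no detail lost

# Decimation is a different operation: triangles are discarded outright.
decimated = triangles[::2]            # keep every other triangle
assert len(decimated) < len(triangles)
```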
 
Last edited:

IntentionalPun

Ask me about my wife's perfect butthole
You didn't explain anything. Your argument is that Epic says it's lossless, therefore it must be lossless. That's not an explanation.


There is no loss of detail, because it's pointless to have 1 billion polygons when you only have ~8 million pixels.


Yes; I explained that. There's no loss in detail because there are only ~8 million pixels anyways on a 4k screen.

The engine does this automatically and dynamically scales assets as they take up more/less pixels on screen. (closer or farther from view)


it then scales that geometry to as close to the rendering resolution as possible, without any loss of detail, by scaling to a number of micro-polygons as close to the number of pixels as possible.

2 of these are in direct response to you, both of them quoted by you.

1 of them was in my first response to you when you called what I was saying "absurdly false." (aka were being insulting.. so I called you a full of shit troll.)
 
Last edited:

LucidFlux

Member
@IntentionalPun, @TheThreadsThatBindUs: I think we can blame Epic for improperly using the term lossless in this context. It was not said in reference to compression AT ALL.

When Epic said "Nanite crunches down billions of polygons worth of source geometry losslessly to 20m drawn triangles", they were saying the end result is lossless. Like if I handed you two images, one of the source material, one from Nanite, you'd say it's the same fucking image.

I agree it was not proper use of the term, but it couldn't mean anything else; no other explanation makes sense. Of course Epic isn't altering the source asset each time an LOD is created for a frame; as TheThreadsThatBindUs just said, it's the reference from which all LODs of that asset are produced. That wouldn't be called lossless either, but non-destructive editing, leaving the source intact. That's why I'm 99% sure they are just referring to the end result Nanite crunches being lossless, i.e. identical to the full-resolution asset.

I think you guys both understand this and are just arguing semantics.
 