
Next-Gen PS5 & XSX |OT| Console tEch threaD


dcmk7

Banned
Don't worry, he will; it's called cognitive dissonance :)

"Cognitive dissonance refers to a situation involving conflicting attitudes, beliefs or behaviors. This produces a feeling of mental discomfort leading to an alteration in one of the attitudes, beliefs or behaviors to reduce the discomfort and restore balance."

Really cool article to read here :)

cognitive dissonance experiments

I love the results of the forced-compliance study that had people putting round pegs into holes for an hour in a mind-numbingly boring task. They were then paid different sums of money to tell the next participant how fun the task was.

"Results

When the participants were asked to evaluate the experiment, the participants who were paid only $1 rated the tedious task as more fun and enjoyable than the participants who were paid $20 to lie.

Conclusion

Being paid only $1 is not sufficient incentive for lying and so those who were paid $1 experienced dissonance. They could only overcome that dissonance by coming to believe that the tasks really were interesting and enjoyable. Being paid $20 provides a reason for turning pegs, and there is therefore no dissonance."
Thanks for this. Really interesting read!
 

IntentionalPun

Ask me about my wife's perfect butthole
Appeal to authority fallacy is when you appeal to someone who's an expert "authority" in a given field and transpose that to a completely different one.
The term is also used when people point to statements from people of "scientific authority" who haven't backed up their opinion with "good science."

Granted, in this scenario we are talking about people with experience developing for the XSS, so as long as people point to that experience, and not just their "authority", it's not valid to dismiss their statements.

They could end up being somewhat wrong; it could turn out that once cross-gen is over and the infamous "tooling" is more robust, etc., the XSS will be a breeze to optimize for. One of the often-quoted developers (Remedy) even suggested cross-gen is part of the problem, and that some of the issues will go away.

But they are still, at the least, expressing serious concerns, from a position of both authority and experience.
 
You didn't address any of my points. The dev quotes you bring up are old, and they don't address any of my points either. You can bring your old quotes and I can bring some logic and reasoning. You could save yourself some time and just say "I personally don't like the XSS" and stop right there. Why not simply explain to us why SFS and VA will not address the RAM issues you keep bringing up in your ancient quotes? If you want to say the XSS is a problem for current-gen gaming, give us the breakdown.

The quotes are 10 months old, and the XSS specs were already out then.
The id Software dev who said the XSS would be a problem for current-gen development also worked on the id Tech engine used in Rage, Wolfenstein and Doom, which used PRT/tiled resources, the same crap as SFS. He definitely knew what he was saying.
 

SlimySnake

Flashless at the Golden Globes

I still don't understand how Nanite works. They say you import the full-quality asset and the engine handles the rest. What is the engine doing? Downgrading that asset and creating different LODs for you? If so, could we potentially have games that improve ten years down the road, when better GPUs arrive that can handle better LODs, or are the assets already downgraded when they are put on disc?
 

PaintTinJr

Member
The last geometry feature alone will make this generation of games look so much better, and for some it will be the generational leap they think they've been missing, when the next-gen-only games come around using it (or the same feature in a different game engine, since the idea is not proprietary, just the execution).

And with this feature, streaming geometry (NOT TEXTURES) will be the thing everyone talks about when devs start using it.
I'm not entirely convinced about the streaming geometry part... but only because in the UnrealFest Nanite video, Brian (IIRC) states that 1 million Nanite triangles with a UV channel take roughly the same storage as a traditional 4K (4096x4096) normal map, which, excluding lossy or lossless compression, should be 64MB per million triangles (AFAIK). That means the 20 million triangles rendered in the UE5 demo (per frame, from billions) would only need 1280MB (62.5GB for a billion triangles), whereas for the ancient warrior (33M ZBrush polygons, comprised of three parts) they state they use 24 8K (8192x8192) textures (192MB per texture, 4.5GB total), and the billions of source polygons in every scene are supposedly textured to the same fidelity.

If the warrior graveyard has even as few as 1,000 model instances built from 10 unique models, we'd be looking at 45GB of raw source textures (uncompressed, whether lossy, lossless, or PRT :) ) in just that scene - but probably substantially more, as 10 unique Nanite models per scene probably wouldn't make things easier for artists as they claim - so I'm guessing the texturing for Nanite is more IO-bandwidth-consuming than the geometry, by an order of about 10:1.

In the UnrealFest video they also stated that the geometry streaming pool for Nanite is 768MB and can be compressed better with optimization. I inferred from the slide that the GPU does a light compression of a frame's geometry straight after rendering and stores it in RAM as a geometry cache - which, if the PS5 allocates even as little as 10GB for the UE5 demo, would mean 9.25GB is left for game data and textures, hence my 10:1 prediction.
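
For anyone who wants to sanity-check that arithmetic, here's a quick back-of-envelope script (just a sketch; the bytes-per-texel values are assumptions chosen to match the uncompressed sizes quoted above, not Epic's numbers):

```python
# Back-of-envelope check of the figures above (uncompressed estimates).
MiB = 1024 ** 2

# A 4K (4096x4096) normal map at 4 bytes/texel is 64 MiB, which the
# UnrealFest talk equates to ~1M Nanite triangles with a UV channel.
per_million_tris = 4096 * 4096 * 4 / MiB                 # 64 MiB

print(f"20M rendered tris: {20 * per_million_tris:,.0f} MiB")          # ~1,280 MiB
print(f"1B source tris:    {1000 * per_million_tris / 1024:.1f} GiB")  # ~62.5 GiB

# Ancient warrior: 24 textures at 8K (8192x8192), 3 bytes/texel.
tex_8k = 8192 * 8192 * 3 / MiB                           # 192 MiB each
print(f"warrior textures:  {24 * tex_8k / 1024:.1f} GiB")              # ~4.5 GiB
print(f"10 unique models:  {10 * 24 * tex_8k / 1024:.1f} GiB")         # ~45 GiB
```

The last line is the 45GB raw-texture figure for the hypothetical graveyard scene.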
 

Zadom

Member
Riky's comment is super fucking weird, but that's what happens when you all gang up on one dude and push a man too far. At this point, we know what Riky prefers, so let's just move on instead of trying to have heated debates with him every single day. It's Riky who lost it today; it will be someone else tomorrow. Clearly he was a bit frazzled by the constant dogpiling.
I've noticed you have a tendency to add words that were not said. At no point did I say anything about "console breaking". You and I can disagree on what is "serious", but to me serious does not mean console-breaking. That would be a "fatal" flaw, and I have never said the PS5 is fatally flawed. To me, coil whine IS serious, because it is uncertain whether it will ever be addressed, and I am reminded of it every time I turn the system on. The BC thing IS serious because another company is offering a significantly better experience for the same price. Things like the UI issues are serious because they affect my enjoyment of using the system. Most of these things can be fixed, and I hope they will be with time. It is still missing the ability to upgrade its internal storage, and features like VRR and 1440p support are absent. No one is crapping on the PS5; I am just realistic about it. I suppose that comes from experiencing both consoles (XSX + PS5) first-hand, so I can see (and hear) the difference.


I have no idea. When a developer has to go back and write a specific PS5 version just to utilize its features, that is a disadvantage. How long will it take for Biomutant to get patched? 3 months? 6? By the time a PS5 version is complete, most of the game's biggest fans will have already moved on. They are essentially saying it will get better, just wait till some new patch comes out at some uncertain date. That isn't a great thing.


Who said that last-gen advantages were proof of next-gen superiority? MS's approach to running games on their platform is certainly superior when a developer can make a game and get multiple outcomes depending on the platform it is running on. It is more than just BC because, as people have stated over and over again, the PS5 does BC, so if that were the case both results would be the same.

In many cases games run BETTER than their last-gen versions, with higher resolutions and framerates. That isn't just BC. It is an objectively better way to receive games. Like how game saves work across different versions of cross-gen titles, and how Smart Delivery gets you the best version of a game based on the platform. MS was just more thoughtful about how these things work. Just calling it BC is the best way to argue for brand over reality. The times they trade blows is ONLY when a native PS5 version of a game is made. Perhaps in a few years' time, when ONLY current-gen games are being made, that will be fine, but in an era when people are struggling to get newer consoles, it matters now.

None of this being true takes away from anyone preferring to play Sony's first-party titles, so if that is the most important thing to you, so be it. I tend to play more 3rd-party stuff, and in many cases it appears MS's solution works best for me.





I was HOPING you guys would have some comments from at least this year! The best you can do is point out complaints made BEFORE the XSS even came out. On top of that, none of them address things like SFS and VA, features designed specifically to deal with RAM management. Since you guys did your research, I'll give a link for everyone else.


Sampler Feedback Streaming (SFS) – A component of the Xbox Velocity Architecture, SFS is a feature of the Xbox Series X and Xbox Series S hardware that allows games to load into memory, with fine granularity, only the portions of textures that the GPU needs for a scene, as it needs it. This enables far better memory utilization for textures, which is important given that every 4K texture consumes 8MB of memory. Because it avoids the wastage of loading into memory the portions of textures that are never needed, it is an effective 2.5x multiplier on average on both amount of physical memory and SSD performance.
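
To illustrate the mechanism, here's a toy sketch of tile-based residency in the spirit of SFS (not the actual DirectX 12 Sampler Feedback API; the 64 KiB tile size is the standard D3D tiled-resource size, while the visible-tile count is a made-up number chosen to land near the quoted 2.5x average):

```python
# Toy model of sampler-feedback-style streaming: only the texture tiles
# the sampler actually touched are made resident, not whole mip levels.
TILE = 64 * 1024               # D3D tiled resources use 64 KiB tiles
TEXTURE = 8 * 1024 ** 2        # a BC-compressed 4K texture ~= 8 MiB (per the quote)

tiles_total = TEXTURE // TILE  # 128 tiles in the full texture
tiles_visible = 51             # hypothetical: what sampler feedback reports as needed

resident = tiles_visible * TILE
print(f"full texture: {TEXTURE / 2**20:.2f} MiB")
print(f"resident:     {resident / 2**20:.2f} MiB "
      f"(~{TEXTURE / resident:.1f}x effective multiplier)")
```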

A 2.5x multiplier on average. Do you guys happen to have comments from the devs you posted from ages ago on how SFS affects the complaints they made? Can you provide ANY evidence that these features will NOT address their concerns? Are you guys even AWARE that the XSS has the same feature set as the XSX? Are you aware that the XSS is NOT a 4K device and therefore will not need 100GB of memory? It has the same CPU. It has the same SSD. The comments the complaining devs made do not reflect what development on the XSS will be like with a mature dev kit. Bernd Lauert also made an excellent point.


Brings a tear to my eye. Someone gets it! Thank you.

After all that was said, the XSS STILL produces a higher-resolution image at 60fps with ray tracing in games like RE8 than the X1X (which has more RAM!!), and a higher resolution in Biomutant than the PS5 and X1X! Despite the complaints, despite the misinformation, the XSS is still providing some pretty impressive outcomes, and all that WITHOUT using some of the system's core features. I'll never know why people just can't accept that the XSS is a fantastic value that offers features more expensive consoles lack. It has to be simple console tribalism, because the XSS has proven itself at this point. More can be addressed if/when a REAL issue comes up.
That’s it. I’m revoking your internet license for absurd opinions. Sorry it’s come to this, but it’s for the benefit of the public. Keep your record clean and you can apply for reinstatement in 6 months.
 

RaySoft

Member
I still don't understand how Nanite works. They say you import the full-quality asset and the engine handles the rest. What is the engine doing? Downgrading that asset and creating different LODs for you? If so, could we potentially have games that improve ten years down the road, when better GPUs arrive that can handle better LODs, or are the assets already downgraded when they are put on disc?
The only way I can see this working is that the master data (8K textures etc.) is only available on the devs' network. As soon as you set your target, the engine downscales all assets according to that platform, whether it's phones, Switch, Xboxes, PS4, PS5 or PC. All assets shipped with the game are then the downscaled assets for that particular platform.
 

arvfab

Banned
The quotes are less than a year old. You asked for a quote; you didn't specify ANY time limitations. Putting a time limit on quotes is weird as hell though... almost as if you are attempting a strawman here.

But I did see your post. I saw you dismissing his comments by very ignorantly and rudely saying the lead engine developer doesn't understand the new features he can use, and that therefore you weren't going to take his comments seriously. It might be difficult to believe, but he knows faaaar more about game development than you.

But it's OK, I understand you're massively into the XSS and you don't want to see its name besmirched. I sort of get it.

But I think you need to look at its performance dispassionately. It's not doing a good enough job at the moment. I mean, explain why 570p at a very unsteady 120fps, in 2021, is good enough? It shouldn't be.

The XSS doesn't even seem like it can run cross-gen titles without being heavily compromised: 45fps (what the hell is that... innovation?), 900p without ray tracing, and a tiny hard drive that can barely fit CoD. That isn't good enough.

So, like it or not, the PS5 DE is the entry point, on price, for next-gen consoles that can run titles without any caveats.

It's fully future-proofed as well: it runs 120Hz better, does 4K gaming, loads quicker, has a more immersive controller, and a lot more hard-drive space to boot. All that for only £100 extra.

I don't think you'd be able to find a console that can run games at the performance of the PS5 DE for less.

It's no wonder they are all sold out while XSSs are readily available here.

I'm sure you would begrudgingly agree with that if you thought about it without brand allegiance involved.
They could give the PS5 away for free, and he would say XSS offers more value.
 

Fafalada

Fafracer forever
I'm not entirely convinced about the streaming geometry part... but only because in the UnrealFest Nanite video, Brian (IIRC) states that 1 million Nanite triangles with a UV channel take roughly the same storage as a traditional 4K (4096x4096) normal map, which, excluding lossy or lossless compression, should be 64MB per million triangles (AFAIK).
The fact they reference a 'normal map' implies DXTC compression, so likely 16MB, not 64. That's a fairly standard size for geometry (1 UV, 1 normal, 1 position) in 16 bytes - we've had that level of compression support in hardware from the PS2 onwards (inclusive) - though of course we don't know what other data a 'nanite triangle' might include.

That said - when it comes to disk/memory usage - you're conflating 'per frame' with 'per level' detail density. I ran the math on that some time ago, and to achieve the fidelity displayed in the demo (~1mm precision of triangle/texture detail), the storage requirements with the estimates Epic gave would come to well over 100GB for the approximate size of the level on display - and that's accounting for tunnel optimizations from the player being on a very restricted path through the level. If the entire level that's visible at the end had to be detailed (of course it wasn't in the demo), we'd be talking multiple terabytes.
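
A crude order-of-magnitude check of that claim (all dimensions below are guesses, since the demo's actual level size isn't public; only the ~16 B/triangle and ~1 B/texel figures come from Epic's estimates):

```python
GiB = 1024 ** 3
BYTES_PER_MM2 = 16 + 1  # ~16 B per triangle + ~1 B per texel at ~1mm detail spacing

def level_storage(length_m, width_m, surface_factor):
    # Exposed surface area at roughly one triangle/texel per square millimetre;
    # surface_factor crudely accounts for walls, props and non-flat geometry.
    samples = (length_m * 1000) * (width_m * 1000) * surface_factor
    return samples * BYTES_PER_MM2

# Hypothetical dimensions, chosen only to test the order of magnitude:
print(f"restricted path: {level_storage(500, 20, 2) / GiB:,.0f} GiB")         # ~300 GiB
print(f"whole level:     {level_storage(2000, 500, 2) / GiB / 1024:.0f} TiB") # tens of TiB
```

Even the restricted-path guess lands "well over 100GB", and opening up the whole level pushes it into terabytes.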
 

IntentionalPun

Ask me about my wife's perfect butthole
I'm not entirely convinced about the streaming geometry part... but only because in the UnrealFest Nanite video, Brian (IIRC) states that 1 million Nanite triangles with a UV channel take roughly the same storage as a traditional 4K (4096x4096) normal map, which, excluding lossy or lossless compression, should be 64MB per million triangles (AFAIK). That means the 20 million triangles rendered in the UE5 demo (per frame, from billions) would only need 1280MB (62.5GB for a billion triangles), whereas for the ancient warrior (33M ZBrush polygons, comprised of three parts) they state they use 24 8K (8192x8192) textures (192MB per texture, 4.5GB total), and the billions of source polygons in every scene are supposedly textured to the same fidelity.

If the warrior graveyard has even as few as 1,000 model instances built from 10 unique models, we'd be looking at 45GB of raw source textures (uncompressed, whether lossy, lossless, or PRT :) ) in just that scene - but probably substantially more, as 10 unique Nanite models per scene probably wouldn't make things easier for artists as they claim - so I'm guessing the texturing for Nanite is more IO-bandwidth-consuming than the geometry, by an order of about 10:1.

In the UnrealFest video they also stated that the geometry streaming pool for Nanite is 768MB and can be compressed better with optimization. I inferred from the slide that the GPU does a light compression of a frame's geometry straight after rendering and stores it in RAM as a geometry cache - which, if the PS5 allocates even as little as 10GB for the UE5 demo, would mean 9.25GB is left for game data and textures, hence my 10:1 prediction.
I'm convinced those textures are applied to the "source models" before scaling for disk storage, myself, and then it's all dynamically scaled based on distance from the player for rendering.

I don't think they are storing several 8K textures per object on disk.
 

PaintTinJr

Member
The fact they reference a 'normal map' implies DXTC compression, so likely 16MB, not 64. That's a fairly standard size for geometry (1 UV, 1 normal, 1 position) in 16 bytes - we've had that level of compression support in hardware from the PS2 onwards (inclusive) - though of course we don't know what other data a 'nanite triangle' might include.

That said - when it comes to disk/memory usage - you're conflating 'per frame' with 'per level' detail density. I ran the math on that some time ago, and to achieve the fidelity displayed in the demo (~1mm precision of triangle/texture detail), the storage requirements with the estimates Epic gave would come to well over 100GB for the approximate size of the level on display - and that's accounting for tunnel optimizations from the player being on a very restricted path through the level. If the entire level that's visible at the end had to be detailed (of course it wasn't in the demo), we'd be talking multiple terabytes.
True about the DXTC/3Dc, but depending on the maths involved in constructing the geometry from that source data, it might not handle lossy compression, so I was being conservative and giving the opposite argument - larger IO bandwidth for geometry - the same advantage I was giving to textures.

I personally think the geometry is using signed-distance-field ray marching, hence the artist (in the Nanite video on the previous page) dropping in a huge slab of rock (a Megascan) and not worrying about clipping against existing scene geometry.
 

PaintTinJr

Member
I'm convinced those textures are applied to the "source models" before scaling for disk storage, myself, and then it's all dynamically scaled based on distance from the player for rendering.

I don't think they are storing several 8K textures per object on disk.
I'm not so sure. Texturing, texture compression and lossless compression are very mature technologies for UE - and for Sony, as far back as the PS3 with its SPUs doing zlib decompression in games like KZ3, IIRC.

The IO bandwidth and latency provided by the IO complex - and async compute in the GPU - make me believe that dealing with 4.5GB per ZBrush warrior model would be more like 4:1 with block compression (1.125GB), another 10%-50% with Oodle Texture (maybe 1GB or 600MB), and then, if using signed distance fields for rendering, you'd get perfect PRT lookups, so another 2x multiplier in most cases, giving 500MB or 300MB per model - which seems manageable. But I obviously could be wrong.
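
Spelling that chain of ratios out (a restatement of the arithmetic above, not measured data; the Oodle percentages are the range given in the post):

```python
# 24 x 8K textures for the warrior: ~4.5 GiB raw (tracked in MiB here).
raw = 4.5 * 1024

block = raw / 4                     # 4:1 BCn block compression -> 1152 MiB (~1.125 GiB)
oodle = (block * 0.9, block * 0.5)  # a further 10%..50% from Oodle Texture
prt = tuple(x / 2 for x in oodle)   # ~2x more from near-perfect PRT-style lookups

print(f"block compressed: {block:.0f} MiB")
print(f"after Oodle:      {oodle[1]:.0f}-{oodle[0]:.0f} MiB")
print(f"after PRT 2x:     {prt[1]:.0f}-{prt[0]:.0f} MiB per model")
```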
 
That's why I said his argument was a strawman. I specified cross-gen vs. gen-aware, and Riky didn't address it; he simply brought up something I hadn't even discussed.

Also, convenient that you ignore the argument he made about "Village runs better" in one section, but then you point out the same for Hitman. At least be coherent with your criticism.

Finally, I stated the differences and said they both trade blows. The argument that one is "clearly superior" is just a lie. I don't care what console people buy; whatever makes them happy.

My main problem is the hypocrisy around Xbox's defence force; the amount of smokescreen being thrown just because they can't be happy with what they have is insane. This is not about "which console is better"; this is about "PlayStation must be worse". This is about me stating that the Xbox is the superior console for BC and that not being enough for little console warriors, because it must not only be the superior BC machine, it must reign supreme over everybody, and don't you dare suggest otherwise. This is why Sony getting a timed exclusive is treated as a capital offence, but MS buying Zenimax is A-OK. (I'm against timed exclusives, btw.)

People should be grateful they have two very powerful consoles, each with their own strengths and weaknesses, but instead I find myself commenting and either being called a pedophile (btw, a one-week ban for calling someone a pedophile (y)) or dealing with bad-faith arguments that completely derail the original topic. The amount of toxic discussion, mostly (not all of it) coming from "one side", is absolutely insane.

"All gang up on one dude". Are you for real? He's the one who can't have a civil discussion, blatantly lies in his comments (as pointed out by many users), cherry-picks and misrepresents data to suit his narrative... I'm sorry, but that doesn't make him a victim. People dogpile on him because he can't hear the words "PS does X better than Xbox" without going ballistic. Also, the immature LOL and Triggered emojis every time someone politely disagrees with him show that he is not looking for a discussion, but trying to get a reaction.

Btw, no matter how triggered I may get, there are certain lines I'll not cross. Calling someone a pedophile is one of them.

That said, end rant and end mentioning Riky. I'm blocking him, and that's that.

He's the biggest clown on here. Trying to discuss Control with him was a nightmare. Multiple times he tried to change the narrative after he ignored facts.
Once to VRR, then to expandable storage (which he mentions in every post).

The guy's a lunatic. 90% of his posts are "yeah, but Xbox is better". I blocked him ages ago, and I still chuckle when I see him and his bumchum using the Triggered emojis on my posts.

And him calling people pedos is a bit worrying from a grown man whose kid is old enough to play a Paradox game; there must be something wrong if he spends all day on here warring.
 
The quotes are 10 months old, and the XSS specs were already out then.
The id Software dev who said the XSS would be a problem for current-gen development also worked on the id Tech engine used in Rage, Wolfenstein and Doom, which used PRT/tiled resources, the same crap as SFS. He definitely knew what he was saying.
NOPE!


"SFS is based on PRT+, and PRT+ is based on PRT&Sampler Feedback. SFS it’s a complete solution for texture streaming, containing both hardware and software optimizations.

Firstly, Microsoft built caches for the Residency Map and Request Map, and records the asset requests on the fly. The difference between this method and traditional PRT methods is kinda like, previously you have to check the map but now you have a gps.

Secondly, you need a fast SSD to use PRT+ and squeeze everything available in the RAM. You won’t want to use a HDD with PRT+, because when the asset request emerges, it has to be answered fast (within milliseconds!). The SSD on Xbox is now priotized for game asset streaming, to minimize latency to the last bit."

He can believe whatever he wants, but it is entirely possible that he was making an assessment based on old information, especially since his comments were made LAST YEAR, before the XSS even hit store shelves. The XSS happens to have the SSD and CPU power to run current-gen games at 1080p. I'd like to hear a detailed explanation of why this won't address the old complaints. No one thought the XSS would be running so many games at 120fps either, so it's clear people made judgments prematurely. It's a pretty impressive device.
 
Btw, no matter how triggered I may get, there are certain lines I'll not cross. Calling someone a pedophile is one of them.

I don't ever use personal attacks because I don't believe they add anything to a discussion. I can understand that when a debate gets extremely heated people might resort to them. But what Riky did was a bit much, in my opinion. Hopefully he learns from this, and I'm pretty sure if he does it again they won't give him any second chances. Paedophilia is a serious crime that can ruin the lives of its victims. Definitely not something anyone should accuse someone of lightly.
 

IntentionalPun

Ask me about my wife's perfect butthole
I'm not so sure. Texturing, texture compression and lossless compression are very mature technologies for UE - and for Sony, as far back as the PS3 with its SPUs doing zlib decompression in games like KZ3, IIRC.

The IO bandwidth and latency provided by the IO complex - and async compute in the GPU - make me believe that dealing with 4.5GB per ZBrush warrior model would be more like 4:1 with block compression (1.125GB), another 10%-50% with Oodle Texture (maybe 1GB or 600MB), and then, if using signed distance fields for rendering, you'd get perfect PRT lookups, so another 2x multiplier in most cases, giving 500MB or 300MB per model - which seems manageable. But I obviously could be wrong.
Interesting..

Either way, I'd love to know more about the data on disk for that demo, and to see something like an I/O counter.

It's the most interesting gaming tech revealed in ages.
 
NOPE!
"SFS is based on PRT+, and PRT+ is based on PRT & Sampler Feedback. SFS is a complete solution for texture streaming, containing both hardware and software optimizations.

Firstly, Microsoft built caches for the Residency Map and Request Map, and records asset requests on the fly. The difference between this method and traditional PRT methods is kind of like: previously you had to check the map, but now you have a GPS.

Secondly, you need a fast SSD to use PRT+ and squeeze everything available out of the RAM. You won't want to use an HDD with PRT+, because when an asset request emerges, it has to be answered fast (within milliseconds!). The SSD on Xbox is prioritized for game asset streaming, to minimize latency to the last bit."

Oh YES!
He can believe whatever he wants, but it is entirely possible that he was making an assessment based on old information, especially since his comments were made LAST YEAR, before the XSS even hit store shelves. The XSS happens to have the SSD and CPU power to run current-gen games at 1080p. I'd like to hear a detailed explanation of why this won't address the old complaints. No one thought the XSS would be running so many games at 120fps either, so it's clear people made judgments prematurely. It's a pretty impressive device.



Wait, what?? Devs start making games for a console only when it hits the store shelves?? Never knew that. That's why Xbox doesn't have exclusives. LOL
He can believe whatever he wants, yet he is an Xbox dev now. And somehow you know better. :/ Also, you mean cross-gen games, not current-gen. There is just one current-gen game on Xbox - The Medium, which is dynamic 1080p during full screen at 30fps on XSS.
 

dcmk7

Banned
NOPE!


"SFS is based on PRT+, and PRT+ is based on PRT&Sampler Feedback. SFS it’s a complete solution for texture streaming, containing both hardware and software optimizations.

Firstly, Microsoft built caches for the Residency Map and Request Map, and records the asset requests on the fly. The difference between this method and traditional PRT methods is kinda like, previously you have to check the map but now you have a gps.

Secondly, you need a fast SSD to use PRT+ and squeeze everything available in the RAM. You won’t want to use a HDD with PRT+, because when the asset request emerges, it has to be answered fast (within milliseconds!). The SSD on Xbox is now priotized for game asset streaming, to minimize latency to the last bit."

He can believe whatever he wants, but it is entirely possible that he was making an assessment based on old information, especially since his comments were made LAST YEAR, before the XSS even hit store shelves. The XSS happens to have the SSD and CPU power to run current-gen games at 1080p. I'd like to hear a detailed explanation of why this won't address the old complaints. No one thought the XSS would be running so many games at 120fps either, so it's clear people made judgments prematurely. It's a pretty impressive device.
They get development kits before the console hits the shelves... You are aware of that, aren't you?

You're blatantly trolling now. How can you state that a lead engine developer doesn't know what's happening in the gaming tech scene?

This claim of yours is pure fantasy. It hasn't got a single shred of truth to back it up.

No one thought the XSS would be running so many games at 120fps either, so it's clear people made judgments prematurely. It's a pretty impressive device.
Some titles even run that mode at sub-600p. VHS-like quality. Which is impressive for 2021.
 

PaintTinJr

Member
Interesting..

Either way, I'd love to know more about the data on disk for that demo, and to see something like an I/O counter.

It's the most interesting gaming tech revealed in ages.
As a second thought, another reason they'd want to keep texturing for UE5 is that it's supported across all target hardware and the quality scales predictably, in a linear fashion, while the storage quarters with each mip level. Texture lookups can easily be enhanced with anisotropic filtering, providing almost identical IQ, so long as the minification or magnification results in the appropriate mipmap(s) being picked and suitably filtered.
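
On the "storage quarters" point: each mip level halves both texture dimensions, so it costs a quarter of the level above it, and a full mip chain converges to only ~4/3 of the base level. A quick check:

```python
# Each mip level halves width and height, i.e. quarters the texel count.
base = 4096 * 4096                                # top-level 4K mip
chain = sum((4096 >> i) ** 2 for i in range(13))  # 4096x4096 down to 1x1

print(f"chain / base = {chain / base:.4f}")       # -> 1.3333 (the 4/3 limit)
```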

By contrast, geometry that isn't defined mathematically has no formally correct way to get a LoD reduction that is right for an arbitrary reduction in triangles, because it is a lossy process in too many dimensions: what provides the best match in terms of closest volume to the higher LoD might not provide the best match for its shape, and might not represent the higher LoD's faces from all directions equally well. And when you then factor in normal-map texturing of a lower-LoD model to try to match the IQ, at a smaller projection, of the higher-LoD version, you add another dimension, and greater artistic skill is needed to get an optimal representation.

Thinking about the geometry LoD problem is certainly reason enough for them to consider using an SDF representation of the ZBrush-quality models, as it moves the LoD problem from model space to the view-frustum projection of geometry in world space - which, like textures, makes it a 2-dimensional problem again (AFAIK).
 

IntentionalPun

Ask me about my wife's perfect butthole
By contrast, geometry that isn't defined mathematically has no formally correct way to get a LoD reduction that is right for an arbitrary reduction in triangles, because it is a lossy process in too many dimensions...

Well, what's interesting is that Epic describes their scaling as lossless; I believe what they mean is that they always have at minimum 1 triangle per pixel. You can't actually display more triangles than pixels, so there's no visible detail lost. The same scaled model, displayed at a resolution higher than it was scaled for, would appear lossily scaled.
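
For a sense of scale (my own numbers, simply restating the pixel bound):

```python
# At ~1 triangle per pixel, the drawn triangle count is capped by the
# output resolution, no matter how dense the source asset is.
for name, (w, h) in {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    print(f"{name}: {w * h / 1e6:.1f}M pixels -> at most ~{w * h / 1e6:.1f}M triangles drawn")
```

So even the 33M-triangle warrior never needs more than ~8.3M drawn triangles at 4K.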
 

PaintTinJr

Member
Well, what's interesting is that Epic describes their scaling as lossless; I believe what they mean is that they always have at minimum 1 triangle per pixel. You can't actually display more triangles than pixels, so there's no visible detail lost.
But that's at the point of rendering, so the data is 2-dimensional by the time it's in the viewport.

The LoD creation problem is in model space and is 3-dimensional (or 4-dimensional when you include normal mapping), and it takes artistic skill.
 
I'm guessing the texturing for Nanite is more IO-bandwidth-consuming than the geometry...
Yes, I believe that is true. I think there will be a platform hard limit on maximum polygons per frame, and objects will be scaled back to fit that limit, with an overall scene analysis happening behind the scenes at engine level during the mastering phase of the software.
I'm convinced those textures are applied to the "source models" before scaling for disk storage, myself, and then it's all dynamically scaled based on distance from the player for rendering.

I don't think they are storing several 8K textures per object on disk.
Yup, my guess is like what you said. So the closer the viewport is to the object in question, the more it streams, up to the maximum number of polygons the platform can render for that single object in any given frame. And the limit is more I/O-dependent than VRAM-dependent, though VRAM is still part of the limiting factor.
 

IntentionalPun

Ask me about my wife's perfect butthole
But that's at the point of rendering, so the data is 2-dimensional by the time it's in the viewport.

The LoD creation problem is in model space and is 3-dimensional (or 4-dimensional when you include normal mapping), and it takes artistic skill.
Not quite sure what you mean; if there aren't enough pixels to even come close to rendering the detail of an object, why does it matter whether the scaled object looks exactly the same as the non-scaled one?

The only thing that would matter is if you zoomed up close to something; but they are storing objects at several times the detail of the output resolution, so they do account for quite a bit of zooming.
 

PaintTinJr

Member
Not quite sure what you mean; if there aren't enough pixels to even come close to rendering the detail of an object, why does it matter whether the scaled object looks exactly the same as the non-scaled one?

The only thing that would matter is if you zoomed up close to something; but they are storing objects at several times the detail of the output resolution, so they do account for quite a bit of zooming.
The problem is redundant processing and performance. You not only want to render at perfect IQ for the distance in the scene, you want to do it with minimal waste.

Take that ancient warrior. Under traditional rendering circumstances, projecting it to just 5 pixels on screen would probably result in a LoD version with fewer than 60 polygons being chosen and rendered. But if you don't have any LoDs to pick from, how do you decide what to render that is less than the 33M triangles of the ZBrush model?

That's why I think they are using SDFs to mathematically represent the models.
 

M1chl

Currently Gif and Meme Champion
I still don't understand how Nanite works. They say you import the full-quality asset and the engine handles the rest. What is the engine doing? Downgrading that asset and creating different LODs for you? If so, could we potentially have games that improve ten years down the road, when better GPUs arrive that can handle better LODs, or are the assets already downgraded when they are put on disc?
Aren't those mesh shaders, by definition? Isn't Nanite something like what the "Unlimited Detail" engine was supposed to be 10 or so years ago? Where you have a fixed number of points on screen which creates a projection (or maybe an approximation) of the 3D scene...
 
Oh YES!


Wait, what?? Devs start making games for a console only when it hits the store shelves?? Never knew that. That's why Xbox doesn't have exclusives. LOL
He can believe whatever he wants, yet he is an Xbox dev now. And somehow you know better. :/ Also, you mean cross-gen games, not current-gen. There is just one current-gen game on Xbox - The Medium, which is dynamic 1080p during full screen at 30fps on XSS.
Since you seem to know him so well, why not get a recent comment so he can confirm your theories? You can even explain why he was right and the other devs who said there was no problem are wrong. If the best you've got is comments from before the system came out, when it was well known that the Xbox GDK was immature and early games had issues, then I don't think you're being very sincere in your commentary.
They get development kits before the console hits the shelves... You are aware of that, aren't you?

You're blatantly trolling now. How can you state that a lead engine developer doesn't know what's happening in the gaming tech scene?

This claim of yours is pure fantasy. It hasn't got a single shred of truth to back it up.


Some titles even run that mode at sub-600p. VHS-like quality. Which is impressive for 2021.
As I told your friend above, the GDK has been updated regularly. If you are so certain of your viewpoint, it should be pretty easy to find some recent comments dealing with things as they are now, not last year, man. Even ethomaz was willing to say an MS engineer was wrong and confront him directly on Twitter. It didn't go as he thought, but he was willing to stand on principle.

As far as your sub-600p comments go: it's something vs. nothing, and I believe something wins every time. People like options. Options some platforms have in more abundance than others. 😉
 
Riky's comment is super fucking weird, but that's what happens when you all gang up on one dude and push a man too far. At this point, we know what Riky prefers, so let's just move on instead of trying to have heated debates with him every single day. It's Riky who lost it today; it will be someone else tomorrow. Clearly he was a bit frazzled by the constant dogpiling. I think it's time for you all to start using the ignore feature a bit more. Trust me, it helps.
 

ethomaz

Banned
Since you seem to know him so well, why not get a recent comment so he can confirm your theories? You can even explain why he was right and the other devs who said there was no problem are wrong. If the best you've got is comments from before the system came out, when it was well known that the Xbox GDK was immature and early games had issues, then I don't think you're being very sincere in your commentary.

As I told your friend above, the GDK has been updated regularly. If you are so certain of your viewpoint, it should be pretty easy to find some recent comments dealing with things as they are now, not last year, man. Even ethomaz was willing to say an MS engineer was wrong and confront him directly on Twitter. It didn't go as he thought, but he was willing to stand on principle.

As far as your sub-600p comments go: it's something vs. nothing, and I believe something wins every time. People like options. Options some platforms have in more abundance than others. 😉
First he came to talk to me and never answered the question, just dodged... next time he wants to join somebody else's conversation, he'd better be honest from the start.

Why did they choose non-RDNA 2 parts, including GCN, for their GPU?
 
First he came to talk to me and never answered the question, just dodged... next time he wants to join somebody else's conversation, he'd better be honest from the start.

Why did they choose non-RDNA 2 parts, including GCN, for their GPU?
Hey man, I can only assume you are more knowledgeable than the MS people. I guess they should consider themselves lucky that the Xbox can compete with PlayStation's performance at all.
 

ethomaz

Banned
Hey man, I can only assume you are more knowledgeable than the MS people. I guess they should consider themselves lucky that the Xbox can compete with PlayStation's performance at all.
It's not about being more knowledgeable than MS; it's about an MS engineer lying.

BTW, this has nothing to do with performance or competition... it's about how the GPU silicon was built... it's public already, and MS doesn't need to lie about it.

An easy answer would have been something like: “While PC RDNA 2 chips have some parts that we found weren't required for a console, we chose older IP in some areas without affecting the overall RDNA 2 feature set.”
 

Fafalada

Fafracer forever
True about the DXTC/3Dc, but depending on the maths involved in constructing the geometry from that source data, it might not handle lossy compression, so I was being conservative and giving the opposite argument - larger IO bandwidth for geometry - the same advantage I was giving to textures.
The estimate Epic gave puts it at 16 bytes per triangle vs. 8 bits per texel (both compressed). Assuming the same per-pixel density of detail for both, geometry eats a lot more bandwidth (which is how it's always been; we just use several orders of magnitude less detail in geometry shapes vs. textures in games to date, which is why the datasets are smaller).

But if you don't have any LoDs to pick from, how do you decide what to render that is less than the 33M triangles of the ZBrush model?
There's all but guaranteed to be a hierarchical representation of the data - both for rendering and streaming efficiency (even if it's using signed distance fields, you'd be storing the thing in some type of octree or the like) - and you can think of the different hierarchy levels as discrete LOD levels. Refer to how texture LODs work for a simplified analogy.
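
A minimal sketch of what treating hierarchy levels as discrete LODs could look like (illustrative only; the quarter-per-level rule and the function below are my own simplification, not Nanite's actual cluster hierarchy):

```python
# Pick a hierarchy level whose triangle count roughly matches the projected
# pixel coverage, the same way texture mip selection picks a mip level.
def pick_lod(tris_at_root: int, projected_px: int, levels: int = 16):
    for lod in range(levels):
        tris = tris_at_root >> (2 * lod)  # each level quarters the triangle count
        if tris <= projected_px:          # target: ~1 triangle per pixel
            return lod, max(tris, 1)
    return levels - 1, 1

# The 33M-triangle warrior covering only ~5 pixels on screen:
lod, tris = pick_lod(33_000_000, projected_px=5)
print(f"LOD {lod}: ~{tris} triangle(s)")  # a deep level with a handful of triangles
```

That also answers the "5 pixels on screen" question above: the hierarchy always gives you something far coarser than the 33M-triangle source to fall back on.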
 
It's not about being more knowledgeable than MS; it's about an MS engineer lying.

BTW, this has nothing to do with performance or competition... it's about how the GPU silicon was built... it's public already, and MS doesn't need to lie about it.

An easy answer would have been something like: “While PC RDNA 2 chips have some parts that we found weren't required for a console, we chose older IP in some areas without affecting the overall RDNA 2 feature set.”
Those are strong claims, accusing someone of lying. What was lied about, and how significant is that alleged lie to customers and to the performance of the device? Most importantly, can you prove it?
 

onesvenus

Member
There's all but guaranteed to be a hierarchical representation of the data - both for rendering and streaming efficiency (even if it's using signed distance fields, you'd be storing the thing in some type of octree or the like) - and you can think of the different hierarchy levels as discrete LOD levels. Refer to how texture LODs work for a simplified analogy.
Exactly. Do you agree with me that it also makes more sense to store those LODs on disk instead of computing them in real time from the bigger mesh? It was a point of discussion some pages back, with people saying there wouldn't even be a LOD system.

The estimate Epic gave puts it at 16 bytes per triangle vs. 8 bits per texel (both compressed)
Do you have a link to where these numbers can be found? I was looking for them the other day but didn't find them.
 
Appeal to authority fallacy is when you appeal to someone who's an expert "authority" in a given field and transpose that to a completely different one. A silly but understandable example would be finding a quote of Einstein saying vaccines cause autism (fake example, OK?) and using it to support that narrative. So yeah, while Einstein was a genius-level intellect when it comes to physics, he's not a biologist, and his comments on the efficacy of vaccines hold no authority in a discussion.

You're welcome :messenger_winking:
From Wiki:

"Some consider that it is a valid inductive argument if all parties of a discussion agree on the reliability of the authority in the given context, while others consider that it is always a fallacy to cite an authority on the debated topic as the primary means of supporting an argument."

 