
Next-Gen PS5 & XSX |OT| Console tEch threaD


LucasBR

Member
You may do like many of us did and spend a good amount of time playing Astro's Playroom. I still haven't played a full VR game yet on PSVR, still on the demos, lol.

Enjoy that, and get ready to be blown away by some PS5 games, and PS5 versions of games.

Oh, I definitely will... I played a few hours yesterday, and when I reached the part where it's raining, followed by something like a hailstorm, my jaw dropped. I had watched some YouTube reviews of the DualSense, but it's not the same as when you finally test it yourself. Can't wait to play other games on it.
 
Godfavor


From the Eurogamer article:

You know that during the Eurogamer interview, Xbox architect Andrew Goossen talked about SFS, BCPack and whatnot regarding the SSD. If Goossen said "over 6", surely he was careful with his words. "Over 6" can mean 6.1, 6.2, 6.3... If it were 6.8 or 6.9, surely he would have said "closer to 7". If it were 7, he would say 7, and so on. And that's why Digital Foundry, immediately after the interview, said in their video about the XSX that the highest number for the XSX SSD is 6 GB/s. No need to spin it otherwise.

Btw, I'm banned from that thread. Looks like spreading crap is allowed.
 

FrankWza

Member
No you twat, it isn't. Some people may feel it's underrated while others may think it's been overhyped.

Looking silly would be "I loves my haptics. It's so awesome! If you don't agree with me, you're just downplaying it and looking silly. Hur dur."

Grow up, and discuss the things you like about it, but don't pretend you're so important as to be any sort of authority figure on... well, anything really.
 

Godfavor

Member
Godfavor

You know that during the Eurogamer interview, Xbox architect Andrew Goossen talked about SFS, BCPack and whatnot regarding the SSD. If Goossen said "over 6", surely he was careful with his words. "Over 6" can mean 6.1, 6.2, 6.3... If it were 6.8 or 6.9, surely he would have said "closer to 7". If it were 7, he would say 7, and so on. And that's why Digital Foundry, immediately after the interview, said in their video about the XSX that the highest number for the XSX SSD is 6 GB/s. No need to spin it otherwise.

Btw, I'm banned from that thread. Looks like spreading crap is allowed.
Not a hard number either, so it's safe to assume the max rate is between 6 and 6.5 GB/s. But that's still an ideal scenario, so the average is still around 4.8 GB/s.
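Quick back-of-the-envelope in Python, just to show where those figures sit relative to the published 2.4 GB/s raw number. Treating the compression ratio as the only multiplier is my own simplification; the real pipeline (SFS, BCPack, etc.) is more complicated:

```python
# Back-of-the-envelope for the XSX SSD. Assumes the published 2.4 GB/s
# raw read figure and treats compression ratio as the only multiplier.
RAW_GBPS = 2.4

def effective_gbps(compression_ratio: float) -> float:
    """Effective throughput if the data compresses by the given ratio."""
    return RAW_GBPS * compression_ratio

print(effective_gbps(2.0))   # 4.8 -> the 'typical compressed' figure
print(effective_gbps(2.5))   # 6.0 -> roughly the 'over 6' best case
```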
 
Not a hard number either, so it's safe to assume the max rate is between 6 and 6.5 GB/s. But that's still an ideal scenario, so the average is still around 4.8 GB/s.

Makes sense, but I fail to see how the Series I/O is faster. The Sage guy tried to answer that question, but he appears to ignore the physical limitations of the drive.

Maybe I'm just struggling to understand how it's possible after the comparisons that we had.
 

Shmunter

Member
Got bad news tho, Quality mode is 30fps and Performance has screen tearing… NOOOOOOO! (Darth Vader Noooo)
Got good news now: the good ol' dev console.

L1 + R1 + X
Type in vsync (lowercase) and it fixes it.

They need to patch it in properly.

Edit: the gfx are quite impressive. No LOD pop-in from what I've seen, something that's very obvious in the last-gen version. Perfect 60 too. 👍🏻👍🏻
 

Rea

Member
Makes sense, but I fail to see how the Series I/O is faster. The Sage guy tried to answer that question, but he appears to ignore the physical limitations of the drive.

Maybe I'm just struggling to understand how it's possible after the comparisons that we had.
He gets his math wrong after being spun by Microsoft PR buzzword multipliers. The actual math is a different story.
 

TrippleA345

Member
I may have missed it, but I took a closer look at the Series S die shot. I outlined the IP blocks from the Series X in rectangles for (size) comparison.



What I could see, compared to the XSX, is:
the GPU uses 2 shader arrays instead of 4;
each of the two shader arrays has 6 WGPs / 12 CUs instead of 7 WGPs / 14 CUs;
the L2 cache has been halved from 5 MB to 2.5 MB;
the L1 cache has also been halved;
the ROP count has been halved from 64 to 32;
the front end is smaller;
the multimedia hardware accelerator (audio, and other things I'm not aware of) has most likely been reduced in size as well;
the parameter cache is halved;
4 memory controllers at the bottom; the 5th memory controller is somewhere else on the chip.

I couldn't identify the other elements. Maybe someone else can do this or correct it if I have something wrong.
 
There was a random user on Era saying that the game's 60fps mode would run at 1080p with no RT. A dev from Insomniac quoted that user's post with a "yikes" emoticon. The dude went ahead and made a whole video about that "leak".

How the heck did the journalist know what the emote meant? It could just as well have been because what the user was saying was completely wrong.
 
Didn't Cerny say it was equivalent to 9 Zen 2 cores?

I wonder if devs can tap into the power of the IO just as Cerny said they can use the Tempest Engine for additional compute.

That's not how fixed-function hardware works.

It's equivalent to 9 Zen 2 cores BECAUSE it's fixed-function. It cannot be used for much else.

The Tempest Engine is a modified stream processor core. It's a general-purpose unit that is great at audio processing. Not the same thing.
 

PaintTinJr

Member
No, they say developers don't have to author LOD levels. That doesn't mean LOD levels aren't used; it means developers don't have to pre-create them. UE5 will automagically create discrete LOD levels from a single mesh and use those when needed.


Epic's words are that LOD levels won't be authored anymore, not that they won't be used. Thinking LOD levels or geometry refinement won't be used is absurd. Do you think a 1M mesh that is projected to a single pixel will be rendered at full precision?
I've looked into your much earlier suggestion that Nanite is probably using signed distance functions (volumetric field rendering), and it does seem plausible IMO, now that I think I understand it.

But from what I understand, SDFs are very much not using polygons for rendering either - unless my understanding of the topic is way off - so the 1M asset, encoded as a combination of many SDFs, would actually be rendered in a fragment shader AFAIK, and would only ray march a small number of rays per pixel. The "4 triangles per pixel" that Epic mention are presumably the SDF primitive unit they've procedurally used to encode the megascanned/Atom View assets with, not triangle primitives in the way we would normally understand a triangle.

As I understood it, LODs wouldn't exist for SDF rendering, because the whole point is that the mathematical representation doesn't lack tessellation the way even the finest mesh - say, one representing a sphere - still would. Polygons are inherently flat, whereas fragments (points) are inherently smooth, and the fidelity of the equation's output automatically scales to the framebuffer - no more, no less - giving perfectly tessellated geometry.
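For anyone following along, this is the core of what "ray marching an SDF in a fragment shader" means. A minimal sphere-tracing sketch in Python; real engines run this per pixel on the GPU, and the sphere SDF and all constants here are purely illustrative, not anything Epic has confirmed:

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 3.0), radius=1.0):
    """Signed distance from p to a sphere: negative inside, positive outside."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def ray_march(origin, direction, sdf, max_steps=64, eps=1e-3, max_dist=100.0):
    """Sphere tracing: step along the ray by the SDF value until we hit a surface."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sdf(p)
        if d < eps:      # close enough: surface hit at distance t
            return t
        t += d           # safe step: nothing can be closer than d
        if t > max_dist:
            break
    return None          # ray missed everything

# One ray from the origin straight down +z hits the sphere at t = 2.0.
print(ray_march((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf))
```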
 

onesvenus

Member
From what I understand, SDFs are very much not using polygons for rendering either - unless my understanding of the topic is way off - so the 1M asset, encoded as a combination of many SDFs, would actually be rendered in a fragment shader AFAIK, and would only ray march a small number of rays per pixel. The "4 triangles per pixel" that Epic mention are presumably the SDF primitive unit they've procedurally used to encode the megascanned/Atom View assets with, not triangle primitives in the way we would normally understand a triangle.

As I understood it, LODs wouldn't exist for SDF rendering, because the whole point is that the mathematical representation doesn't lack tessellation the way even the finest mesh - say, one representing a sphere - still would. Polygons are inherently flat, whereas fragments (points) are inherently smooth, and the fidelity of the equation's output automatically scales to the framebuffer - no more, no less - giving perfectly tessellated geometry.
You are right that SDFs in the mathematical sense have unlimited resolution, but almost all the big implementations store them discretized in a grid-like volume texture.

The voxel size is what effectively constitutes the LOD level. With a coarser grid you take bigger steps when raymarching, but the volume has less resolution.
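To make the "discretized in a grid-like volume texture" part concrete, here is a toy NumPy sketch. The sphere SDF and the two resolutions are numbers I picked for illustration, but they show why the grid resolution effectively is the LOD level:

```python
import numpy as np

def bake_sdf(sdf, res, extent=2.0):
    """Discretize an analytic SDF into a res^3 voxel grid over [-extent, extent]^3.
    A lower res means coarser voxels: effectively a lower LOD of the same surface."""
    axis = np.linspace(-extent, extent, res)
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    return sdf(x, y, z), (2.0 * extent) / (res - 1)  # grid + voxel size

def sphere(x, y, z, r=1.0):
    return np.sqrt(x * x + y * y + z * z) - r

hi_grid, hi_voxel = bake_sdf(sphere, res=128)  # fine grid: small voxels
lo_grid, lo_voxel = bake_sdf(sphere, res=16)   # coarse grid: one "LOD" down

# The voxel size bounds how finely the stored field can represent the surface.
print(hi_voxel, lo_voxel)  # ~0.031 vs ~0.267 world units per voxel
```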

I used polygonal meshes in the example described in the post you quote because I think folks are more used to polygons and it would be easier to understand. And even if Nanite used SDFs, they are not really well suited for everything. Animated characters, for example, would be hard, so I'm sure it would be a mixed approach where some meshes are rendered via SDFs and others aren't. That last kind of mesh will be using LODs, IMHO.

Having said that, I can obviously be wrong. I think it aligns quite well with the restrictions Epic claims for Nanite and with Brian Karis' prior research, but it might be personal bias from my interest in that kind of rendering.

Even if that weren't the case, though, I think we can all agree that "having no LOD system" is bullshit and that traditional LOD methods won't be removed (except in a few simple cases) in the near future. Improving progressive-meshes-like techniques to further refine a LOD, making transitions between levels almost invisible, and not having to author LODs manually does not mean LODs won't be used, which is the initial claim I was debunking.

Let me take advantage of this post to ask you something in return: I'd love it if the few forum posters here with more graphics programming knowledge, like you, wouldn't let these misconceptions spread. It's great to speculate about what they are doing, but spawning technical discussions from things that are obviously wrong only brings confusion to the table and gets used as fuel for trolling (see all the "R&C is still using traditional LOD systems", "PS5 won't use LODs" or "Xbox can only render 5x fewer polygons than the PS5" posts across the forum in other threads).
 

Panajev2001a

GAF's Pleasant Genius
You are right that SDFs in the mathematical sense have unlimited resolution, but almost all the big implementations store them discretized in a grid-like volume texture.

The voxel size is what effectively constitutes the LOD level. With a coarser grid you take bigger steps when raymarching, but the volume has less resolution.

I used polygonal meshes in the example described in the post you quote because I think folks are more used to polygons and it would be easier to understand. And even if Nanite used SDFs, they are not really well suited for everything. Animated characters, for example, would be hard, so I'm sure it would be a mixed approach where some meshes are rendered via SDFs and others aren't. That last kind of mesh will be using LODs, IMHO.

Having said that, I can obviously be wrong. I think it aligns quite well with the restrictions Epic claims for Nanite and with Brian Karis' prior research, but it might be personal bias from my interest in that kind of rendering.

Even if that weren't the case, though, I think we can all agree that "having no LOD system" is bullshit and that traditional LOD methods won't be removed (except in a few simple cases) in the near future. Improving progressive-meshes-like techniques to further refine a LOD, making transitions between levels almost invisible, and not having to author LODs manually does not mean LODs won't be used, which is the initial claim I was debunking.

Let me take advantage of this post to ask you something in return: I'd love it if the few forum posters here with more graphics programming knowledge, like you, wouldn't let these misconceptions spread. It's great to speculate about what they are doing, but spawning technical discussions from things that are obviously wrong only brings confusion to the table and gets used as fuel for trolling (see all the "R&C is still using traditional LOD systems", "PS5 won't use LODs" or "Xbox can only render 5x fewer polygons than the PS5" posts across the forum in other threads).
Removing manual LOD authoring (a very big win on the game production side) and removing LODs completely, even if only for some parts of the scene (the UE5 video was clear that Nanite was not being used for dynamic geometry like player characters), are already big steps forward.

It is not just about the rendering part: content authoring speed matters a great deal, and with it how quickly, and thus how often, you as a developer can iterate over the game during production. That is what will help max these consoles out and advance the medium.
 

onesvenus

Member
Removing manual LOD authoring (a very big win on the game production side) and removing LODs completely, even if only for some parts of the scene (the UE5 video was clear that Nanite was not being used for dynamic geometry like player characters), are already big steps forward.

It is not just about the rendering part: content authoring speed matters a great deal, and with it how quickly, and thus how often, you as a developer can iterate over the game during production. That is what will help max these consoles out and advance the medium.
Completely agree, I don't think I have ever downplayed what UE5 is doing. Anything that gives developers time to focus on what really matters is a big win.

I'm just trying to debunk the claim that no LODs will be used and that that will be the way going forward. I think I already made my point clear: LODs will keep being used because they bring performance improvements, and their downsides can be worked around, like mip-mapping does on the texturing side. There are multiple things that can be fixed regarding them (transitions between different LOD levels being one) and I hope there's progress there, but I think we can all agree that "no LODs at all" is not happening, at least in the next 10 years. Even Pixar uses LODs in their movies.
 
Completely agree, I don't think I have ever downplayed what UE5 is doing. Anything that gives developers time to focus on what really matters is a big win.

I'm just trying to debunk the claim that no LODs will be used and that that will be the way going forward. I think I already made my point clear: LODs will keep being used because they bring performance improvements, and their downsides can be worked around, like mip-mapping does on the texturing side. There are multiple things that can be fixed regarding them (transitions between different LOD levels being one) and I hope there's progress there, but I think we can all agree that "no LODs at all" is not happening, at least in the next 10 years. Even Pixar uses LODs in their movies.
Technically LODs will always be used, but you're thinking of how it works the old way. The new way from UE5 forward (and probably Sony studios already have this tech too) is that there are no old-fashioned LODs anymore. The scene will be rendered to a budget and the engine will choose what to render for each pixel, technically creating LODs on its own. The difference now, though, is that you won't see any LOD changes, because they're subpixel anyway, and you don't have to manually create them anymore, saving lots of time and space.

Which is great for the player too, because you always think you're seeing the highest-detail version of an asset, whether that's technically true or not. Also no more pop-in, which is maybe even more important 🙃. We've seen in the UE5 demo how it works, and now it's just a matter of waiting maybe 1 or 2 more years for actual games that use this new tech completely.

 

onesvenus

Member
The new way from UE5 forward (and probably Sony studios already have this tech too) is that there are no old-fashioned LODs anymore. The scene will be rendered to a budget and the engine will choose what to render for each pixel, technically creating LODs on its own
See? This is what I'm against. What is the opposite of "old-fashioned LODs"?
Engines have always chosen which LOD to use. Whether LODs are created on the fly or not does not mean LODs aren't being used. Progressive refinement is nothing new either; the progressive meshes paper was published in 1996 and the technique has been used extensively since then.

Let me write down an example, again. Imagine we have a 1M-polygon mesh which is drawn twice: once near the camera, and once far away, spanning a couple of pixels. Do you really think it makes sense to derive the second one from the first at runtime? It doesn't. You would be wasting cycles simplifying something you could precompute. "No LOD authoring" means developers won't have to create them, but I'm sure UE5 will create simplified versions of meshes prior to execution and use those when needed. Even if you could do it in real time, there are more important things to spend cycles on. And again, this doesn't mean you'll see LOD transitions; you can work around that without removing LODs entirely.
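Here is a sketch of that precompute-and-select idea in Python. The screen-coverage heuristic and the triangle counts are mine, purely for illustration; this is not UE5's actual selection logic:

```python
def select_lod(distance, bound_radius, pixels_per_radian, lod_tri_counts):
    """Pick a precomputed LOD from the mesh's projected screen size.
    lod_tri_counts is ordered finest-first."""
    # Small-angle approximation of the projected size in pixels.
    pixels = 2.0 * bound_radius / max(distance, 1e-6) * pixels_per_radian
    target_tris = pixels * pixels  # about one triangle per pixel is plenty
    for i, tris in enumerate(lod_tri_counts):
        if tris <= target_tris:
            return i
    return len(lod_tri_counts) - 1  # clamp to the coarsest level we stored

lods = [1_000_000, 250_000, 60_000, 1_500]
print(select_lod(2.0, 1.0, 1000, lods))    # near the camera -> LOD 0 (full mesh)
print(select_lod(500.0, 1.0, 1000, lods))  # a few pixels away -> coarsest LOD
```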
 

Shmunter

Member
See? This is what I'm against. What is the opposite of "old-fashioned LODs"?
Engines have always chosen which LOD to use. Whether LODs are created on the fly or not does not mean LODs aren't being used. Progressive refinement is nothing new either; the progressive meshes paper was published in 1996 and the technique has been used extensively since then.

Let me write down an example, again. Imagine we have a 1M-polygon mesh which is drawn twice: once near the camera, and once far away, spanning a couple of pixels. Do you really think it makes sense to derive the second one from the first at runtime? It doesn't. You would be wasting cycles simplifying something you could precompute. "No LOD authoring" means developers won't have to create them, but I'm sure UE5 will create simplified versions of meshes prior to execution and use those when needed. Even if you could do it in real time, there are more important things to spend cycles on. And again, this doesn't mean you'll see LOD transitions; you can work around that without removing LODs entirely.
Solid debate. Would like to see if auto LOD scaling is possible. I absolutely have no idea whether things like mesh shaders etc. make scaling down from a single source feasible and cheap.

Keeping in mind, today there is also logic involved in deciding which LOD to use at which time, and possibly even overhead in fetching it if it's not readily in RAM. Cache-hit implications, etc.
 

assurdum

Banned
There was a random user on Era, saying that the game will run the 60fps mode at 1080p with no RT. A dev from Insomniac quoted the post of that user with a "yikes" emoticon. Dude went ahead and made a whole video about that "leak".
Can we have a link? 1080p for 60fps without ray tracing seems extremely exaggerated.
 

onesvenus

Member
Would like to see if auto LOD scaling is possible
It is; the paper I was referring to, from 1996, defined a set of operators to do exactly that. It has been used extensively. It's nothing new.

Keeping in mind, today there is also logic involved in deciding which LOD to use at which time, and possibly even overhead in fetching it if it's not readily in RAM. Cache-hit implications, etc.
The logic involved in deciding which LOD level to use will still need to be there even if LODs are created as needed. That's because you'll want to know what your simplification target is.

And yes, there's overhead in fetching them if they're not in RAM, but if that hasn't been a problem until now, it won't be a problem with the I/O bandwidth of the new consoles.

But see? As I/O bandwidth grows, the case for precomputing LODs and fetching them into RAM when needed gains weight. Keeping a single mesh in memory and spending cycles simplifying it made more sense when requesting discrete LODs from disk was more expensive due to limited bandwidth or disk space. The cycles you spend doing something that could be precomputed are better spent on something else (progressively refining the discrete LODs as needed to minimize pop-in, for example).

Again, the better example is mip mapping. Games could use textures at full resolution and downsample them when needed, but they don't, because you can precompute the downsampled versions and spend your power on something far more interesting. Also, take into account that downsampling a texture is a much cheaper operation than simplifying geometry, and games are still not doing it at runtime; there's no talk of removing texture LODs, AFAIK.
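Since mip mapping keeps coming up as the analogy, the whole precompute idea fits in a few lines of Python. This is a toy 2x2 box filter on a square power-of-two texture; real tools use better filters, but the point is that it all happens offline:

```python
import numpy as np

def build_mip_chain(texture):
    """Precompute a mip chain by repeated 2x2 box-filter downsampling,
    so the GPU never pays for downsampling at runtime."""
    chain = [texture]
    while chain[-1].shape[0] > 1:
        t = chain[-1]
        # Average each 2x2 block into one texel of the next level down.
        chain.append(t.reshape(t.shape[0] // 2, 2, t.shape[1] // 2, 2).mean(axis=(1, 3)))
    return chain

base = np.random.rand(512, 512)       # stand-in for a 512x512 texture
mips = build_mip_chain(base)
total = sum(m.size for m in mips)
print(len(mips), total / base.size)   # 10 levels, ~1.33x the base storage
```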
 

Shmunter

Member
It is; the paper I was referring to, from 1996, defined a set of operators to do exactly that. It has been used extensively. It's nothing new.

The logic involved in deciding which LOD level to use will still need to be there even if LODs are created as needed. That's because you'll want to know what your simplification target is.

And yes, there's overhead in fetching them if they're not in RAM, but if that hasn't been a problem until now, it won't be a problem with the I/O bandwidth of the new consoles.

But see? As I/O bandwidth grows, the case for precomputing LODs and fetching them into RAM when needed gains weight. Keeping a single mesh in memory and spending cycles simplifying it made more sense when requesting discrete LODs from disk was more expensive due to limited bandwidth or disk space. The cycles you spend doing something that could be precomputed are better spent on something else (progressively refining the discrete LODs as needed to minimize pop-in, for example).

Again, the better example is mip mapping. Games could use textures at full resolution and downsample them when needed, but they don't, because you can precompute the downsampled versions and spend your power on something far more interesting. Also, take into account that downsampling a texture is a much cheaper operation than simplifying geometry, and games are still not doing it at runtime; there's no talk of removing texture LODs, AFAIK.
All makes sense, but don't GPUs have dedicated hardware to do some of the heavy lifting now?

The speed of the I/O also makes sense in the opposite direction: if you're no longer reserving as much for things outside the viewport and can bring in the highest quality with speed, do it.

Pure speculation.
 

onesvenus

Member
don't GPUs have dedicated hardware to do some of the heavy lifting now?
To do mesh simplification? No, they don't. You can do it with shaders, but there's no fixed-function unit for it.
The speed of the I/O also makes sense in the opposite direction: if you're no longer reserving as much for things outside the viewport and can bring in the highest quality with speed, do it.
Yup, but that's only taking the I/O into account; you then still have to compute the simplification.

Anyway, let's wait until UE5 is released. Can't wait to see how LOD management is done.
 
Basically, he said the 60fps mode was 1080p without any proof. I'm not saying it isn't possible, but as a journalist he needs to provide proof for his claims.

Aren't Miles Morales and Spider-Man almost native 4K at 60fps? Even the RT 60fps mode had the game running at a resolution higher than 1080p. Why would he claim something so preposterous?
 

IntentionalPun

Ask me about my wife's perfect butthole
No, they'll do a combination of both. They'll do real-time simplification between discrete LOD levels.

While I get what you are saying, I still don't understand how/why you and the other person you quoted speak as though you know for a fact how it will work.

But I do get it, and what you are saying makes some sense: multiple pre-calculated LODs that you then scale from.

But we'll see I guess.

The problem with that technique "making sense" is that we are talking about insanely detailed models (they scale down to ~4K-precision models for disk storage, but that's still massive), and one of the bigger limitations is disk space. Even with compression halving space usage over last gen for a given model/texture, or more, the numbers quoted are many times larger than the compression advancements. Then you store multiple LOD levels on top of that? Games would be insanely large unless this technique is used sparingly, for special set-piece areas or something. Which has been the biggest question regarding UE5 in general.
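For what it's worth, the storage overhead of a full LOD chain is bounded by a geometric series. A quick sketch, assuming each stored level halves in size (my assumption, not anything Epic has stated); though even a sub-2x multiplier on already huge base meshes is still a lot of disk:

```python
def chain_overhead(levels, ratio=0.5):
    """Total size of a LOD chain relative to the base mesh, assuming each
    level is `ratio` times the size of the previous one (an assumption)."""
    return sum(ratio ** i for i in range(levels))

print(chain_overhead(10))        # ~1.998: a halving chain costs <2x the base
print(chain_overhead(10, 0.25))  # ~1.333: quartering (mip-style) costs ~1.33x
```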
 
See? This is what I'm against. What is the opposite of "old-fashioned LODs"?
Engines have always chosen which LOD to use. Whether LODs are created on the fly or not does not mean LODs aren't being used. Progressive refinement is nothing new either; the progressive meshes paper was published in 1996 and the technique has been used extensively since then.

Let me write down an example, again. Imagine we have a 1M-polygon mesh which is drawn twice: once near the camera, and once far away, spanning a couple of pixels. Do you really think it makes sense to derive the second one from the first at runtime? It doesn't. You would be wasting cycles simplifying something you could precompute. "No LOD authoring" means developers won't have to create them, but I'm sure UE5 will create simplified versions of meshes prior to execution and use those when needed. Even if you could do it in real time, there are more important things to spend cycles on. And again, this doesn't mean you'll see LOD transitions; you can work around that without removing LODs entirely.
I think we're telling the same story here; maybe I just didn't use the right words 🙂. I'm also saying there'll be LODs, but decided by the engine, and at a subpixel level, so you won't ever see them as a gamer.
 

muteZX

Banned
Re: LOD, pop-up etc. - watching in 1080p with shit YT image quality...

Ratchet PS5

9:37 - grass pop-up (middle centre of the screen, around the rock)
10:25 - abrupt LOD change (middle centre-right; branches, bushes around the tree)
12:45 - shadow pop-up (middle centre, on the road)
 
Re: LOD, pop-up etc. - watching in 1080p with shit YT image quality...

Ratchet PS5

9:37 - grass pop-up (middle centre of the screen, around the rock)
10:25 - abrupt LOD change (middle centre-right; branches, bushes around the tree)
12:45 - shadow pop-up (middle centre, on the road)

I think that's proof that the footage wasn't faked. A render farm wouldn't leave those imperfections. If anything, it's good news for PS5 owners.
 
I think that's proof that the footage wasn't faked. A render farm wouldn't leave those imperfections. If anything, it's good news for PS5 owners.
Of course the gameplay footage isn't faked, oh my goodness, what some people believe lol. Ratchet makes use of traditional LODs, so yes, there'll be some noticeable pop-in and LOD transitions 🙂. For the rest: Pixar movie 😉
 