
Unreal Engine 5.0 OUT NOW

Keihart

Member
Is Nanite supported on animated objects like characters now? I haven't kept up with it since the beta, and that was a limitation IIRC.
 

CamHostage

Member
Is Nanite supported on animated objects like characters now? I haven't kept up with it since the beta, and that was a limitation IIRC.

I don't believe there were any significant improvements or newly supported functions specifically for Nanite in UE 5.0. (At least, no improvements that gamers would point to as something they'd be excited to see active in games; Nanite has seen a lot of improvement since Early Access last year, including a number of optimization and compression features. Techies can read up on the details in the UE 5.0 Release Notes.)

A developer can still have moving objects in UE5 with Nanite enabled (the cars in Matrix Awakens are Nanite until they crash, for example, and the robot in Valley of the Ancients was Nanite), and you can also have some general physics applied to Nanite objects, including smashing the object (provided you pre-fracture it and "glue it together" with a geometry collection so that it can be broken later). What you still cannot have is deformation of a Nanite mesh, as Nanite only supports rigid geometry. So you can't have a character bending or squashing, with their skin and clothes adjusting to movement; to do that, you would employ other character animation techniques.
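For anyone poking at the demo or the editor, a quick way to see what is and isn't Nanite is to toggle it from the console (cvar name from memory, so double-check against the console's autocomplete):

r.Nanite 0 (temporarily falls back to the non-Nanite fallback meshes)
r.Nanite 1 (turns Nanite rendering back on)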
 
Last edited:

SlimySnake

Flashless at the Golden Globes
I uploaded the file. Now everyone can download it more easily.

Just downloaded this. I can't change the resolution because I can't find the Saved folder that has the config files. For whatever reason, the resolution is really low. It looks like 720p. My TV is native 4K. I changed it to 1440p and it is still the same. Really blurry.

EDIT: Found the config file in the C:/user directory, but no matter how much I change the resolution settings, it won't affect the in-game resolution. What am I doing wrong?

[ScalabilityGroups]
sg.ResolutionQuality=100
sg.ViewDistanceQuality=3
sg.AntiAliasingQuality=3
sg.ShadowQuality=3
sg.GlobalIlluminationQuality=3
sg.ReflectionQuality=3
sg.PostProcessQuality=3
sg.TextureQuality=3
sg.EffectsQuality=3
sg.FoliageQuality=3
sg.ShadingQuality=3

[/Script/Engine.GameUserSettings]
bUseVSync=False
bUseDynamicResolution=False
ResolutionSizeX=3840
ResolutionSizeY=2160
LastUserConfirmedResolutionSizeX=3840
LastUserConfirmedResolutionSizeY=2160
WindowPosX=-1
WindowPosY=-1
FullscreenMode=1
LastConfirmedFullscreenMode=1
PreferredFullscreenMode=1
Version=5
AudioQualityLevel=0
LastConfirmedAudioQualityLevel=0
FrameRateLimit=0.000000
DesiredScreenWidth=3840
bUseDesiredScreenHeight=False
DesiredScreenHeight=2160
LastUserConfirmedDesiredScreenWidth=3840
LastUserConfirmedDesiredScreenHeight=2160
LastRecommendedScreenWidth=-1.000000
LastRecommendedScreenHeight=-1.000000
LastCPUBenchmarkResult=-1.000000
LastGPUBenchmarkResult=-1.000000
LastGPUBenchmarkMultiplier=1.000000
bUseHDRDisplayOutput=False
HDRDisplayOutputNits=1000

[ShaderPipelineCache.CacheFile]
LastOpened=CitySample
 
Last edited:

vpance

Member
Just downloaded this. I can't change the resolution because I can't find the Saved folder that has the config files. For whatever reason, the resolution is really low. It looks like 720p. My TV is native 4K. I changed it to 1440p and it is still the same. Really blurry.

EDIT: Found the config file in the C:/user directory, but no matter how much I change the resolution settings, it won't affect the in-game resolution. What am I doing wrong?

[ScalabilityGroups]
sg.ResolutionQuality=100
sg.ViewDistanceQuality=3
sg.AntiAliasingQuality=3
sg.ShadowQuality=3
sg.GlobalIlluminationQuality=3
sg.ReflectionQuality=3
sg.PostProcessQuality=3
sg.TextureQuality=3
sg.EffectsQuality=3
sg.FoliageQuality=3
sg.ShadingQuality=3

[/Script/Engine.GameUserSettings]
bUseVSync=False
bUseDynamicResolution=False
ResolutionSizeX=3840
ResolutionSizeY=2160
LastUserConfirmedResolutionSizeX=3840
LastUserConfirmedResolutionSizeY=2160
WindowPosX=-1
WindowPosY=-1
FullscreenMode=1
LastConfirmedFullscreenMode=1
PreferredFullscreenMode=1
Version=5
AudioQualityLevel=0
LastConfirmedAudioQualityLevel=0
FrameRateLimit=0.000000
DesiredScreenWidth=3840
bUseDesiredScreenHeight=False
DesiredScreenHeight=2160
LastUserConfirmedDesiredScreenWidth=3840
LastUserConfirmedDesiredScreenHeight=2160
LastRecommendedScreenWidth=-1.000000
LastRecommendedScreenHeight=-1.000000
LastCPUBenchmarkResult=-1.000000
LastGPUBenchmarkResult=-1.000000
LastGPUBenchmarkMultiplier=1.000000
bUseHDRDisplayOutput=False
HDRDisplayOutputNits=1000

[ShaderPipelineCache.CacheFile]
LastOpened=CitySample

Try right clicking the shortcut and 'Run as Administrator'. Worked for me when I had this issue on my build.
 

SlimySnake

Flashless at the Golden Globes
Try right clicking the shortcut and 'Run as Administrator'. Worked for me when I had this issue on my build.
Just tried that. No luck.

EDIT: Got it to work. The Windows scaling settings for text and layout were messing with it. The game runs at 24 fps at 1440p on my 2080. With DLSS I can get 40-48 fps when driving around. Very impressive, but it doesn't look as good as native.

I might be going crazy, but I think this looked better on my console. I will download it on my PS5 tonight.
 
Last edited:

sertopico

Member
Just tried that. No luck.

EDIT: Got it to work. The Windows scaling settings for text and layout were messing with it. The game runs at 24 fps at 1440p on my 2080. With DLSS I can get 40-48 fps when driving around. Very impressive, but it doesn't look as good as native.

I might be going crazy, but I think this looked better on my console. I will download it on my PS5 tonight.
You'll find the INI files somewhere else, namely here: C:\Users\Username\AppData\Local\CitySample\Saved\Config\Windows

This new compiled version had the resolution set at 1080p.

edit: I misread the post. Forget it. :D
 
Last edited:

yamaci17

Member
Wow, this new version really runs way better. BUT DLSS still looks pretty bad. This has to be either an issue with Lumen or with the motion vectors, I assume.
It looks better than TSR, but not by much.

here TSR:
citysample12.04.202204fki0.png


and here with DLSS Quality:
citysample12.04.20220mqke5.png


Notice the RIDICULOUS performance boost with DLSS Quality over TSR, while also looking better in motion than the TSR results!
Literally a 35% increase in performance... like... damn...


my setup again:
Ryzen 5600X
RTX 3060Ti (TUF Gaming)
16GB DDR4 @3200mhz

Dell Monitor: 1440p 144hz


EDIT:

I tried getting some matched motion shots. Since this has unavoidable camera motion blur, I had to find a spot where I could exactly time a screenshot while walking sideways, and HOLY HELL, I didn't expect I could line it up so well! LOL


I'm not gonna tell you which is which; one is TSR, the other DLSS Quality. Like I said, I think neither is properly implemented here, as the amount of artifacting on display is crazy.

If you want to find out which is which, you can look at the URL names: one is called "citysampledlssmotion2skbo.png" and the other "citysampletsrmotion2mkq5.png".
So if you wanna see if you can tell which is which, and then check whether you were right, just look at the URL :)
citysampletsrmotion2mkq5.png

citysampledlssmotion2skbo.png


Again, how crazy well did I line this up? FIRST TRY TOO! xD I mean, I was pretty good at Guitar Hero back in the day, maybe my timing is still trained from that lol
In both shots I walked all the way right against the wall first, and then tried to time my screenshot at exactly the moment the rail in the back left overlapped with the left lamp post.

Specular highlights look cleaner with DLSS imo, but both have bad artifacts. It seems like DLSS almost acts as a secondary denoiser on top of the normal RT denoising going on, resulting in less flicker and less specular shimmering.

Also, with DLSS your character has less of a trail behind her compared to TSR, where it looks like heat distortion or some shit behind her lol
Disable motion blur.

Add to Engine.ini:

[SystemSettings]
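; Engine.ini sits next to GameUserSettings.ini in ...\CitySample\Saved\Config\Windows (the AppData path mentioned above)
; setting the quality to 0 turns motion blur off entirely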
r.MotionBlurQuality=0
 

yamaci17

Member
As expected, a 2700 clocked at 3.7 GHz (not even its usual 4.1 GHz) outperforms the consoles in the new build.

100% crowd, car and parked-car density, mostly 30+ fps at all times (aside from crashes or high speed), unlike the consoles, which can only hold 30 fps with these at 50% density.


 
Last edited:

DenchDeckard

Moderated wildly
It was always obvious that PC would wipe the floor with the consoles, especially once the engine gets even more optimisation. There was never really any chance that the consoles could compete with high-end PCs, even with the magic Cerny I/O. By this Xmas the consoles are going to be left even further in the dust, and the generation hasn't really even started to get going yet. Covid has buggered us.

I still love my consoles and play them every day, but we were all living in some dream world, tricked by clever marketing. PC will continue to deliver higher resolutions, higher fps and higher settings, just like it has over the last 30 years.
 
Last edited:

sendit

Member
As expected, a 2700 clocked at 3.7 GHz (not even its usual 4.1 GHz) outperforms the consoles in the new build.

100% crowd, car and parked-car density, mostly 30+ fps at all times (aside from crashes or high speed), unlike the consoles, which can only hold 30 fps with these at 50% density.



The consoles' frame rate was capped at 30.
 
This UE5 sample made me realize:

CPUs are a huge bottleneck for modern game tech.
That looks even worse when people hyped the weak mobile Zen 2 CPUs in the next-gen consoles.
Like MS has said, decompression requires multiple CPU cores, so HW decompression (RTX IO) should free up CPU resources in the future. I bet the XSX/PS5 versions already use HW decompression, because performance is much better there.
 

PaintTinJr

Member
Just tried that. No luck.

EDIT: Got it to work. The Windows scaling settings for text and layout were messing with it. The game runs at 24 fps at 1440p on my 2080. With DLSS I can get 40-48 fps when driving around. Very impressive, but it doesn't look as good as native.

I might be going crazy, but I think this looked better on my console. I will download it on my PS5 tonight.
No, I don't think you are. It looks very UE4 in that pre-compiled demo, even if on XSX/PS5 it still looks like a halfway house between UE4 and UE5 - so I'll probably have to download the engine and sample from Epic and try it in the Editor to check.

Lots of things look off: car contact with the road looks a little floaty, far draw-distance detail looks non-GI'd, and the reflections didn't look correct for the viewing angles - so I'm guessing they are just cube-mapped reflections in the compiled demo.

It also appears to be CPU clock dependent for performance. I've had a stability issue with one of my 8x DDR3 sticks in quad-channel mode and have temporarily disabled SpeedStep on my Xeon, so at its 2.7GHz base clock it hits a frame-rate wall and won't go above 26fps with my RTX 3060, with or without DLSS. Obviously the 30MB of shared L3 cache etc. means it is unlikely to be cache bound on my CPU.
 

ethomaz

Banned
Alex DF shared a pic comparing the Lumen SW RT vs HW RT.

Software Lumen totally offers offscreen reflections - but not of certain types (no skinned meshes, for example). They are also dramatically lower quality.

This is a rough shot and my shaders are still compiling for software lumen as you can see on the coat (this takes like 5 hours?) but this is the gist.
software.00_00_07_30.fxj51.png

Welcome to PS2 or PS3 reflections without hardware RT :D
 
Last edited:

SlimySnake

Flashless at the Golden Globes

OMG. I never saw this. I am glad this guy used the same fast car to show off the PS5 vs XSX comparison, because Alex the little shit used a fast car in his PS5 comparison and a slow car in the XSX benchmark. Clearly the framerate drops are directly tied to how fast you are going and how much you are crashing, and this video shows that. People laughed at me when I pointed this out back then. Dumb dumb dumb.

I am surprised to see that the PS5 and XSX are running at 1620p, with 1404p being the common resolution. DF made me think it was 1080p for whatever reason. Is it 1404p upscaled by TSR from a base resolution of 1080p, or is it native 1404p upscaled to 4K using TSR?

No, I don't think you are. It looks very UE4 in that pre-compiled demo, even if on XSX/PS5 it still looks like a halfway house between UE4 and UE5 - so I'll probably have to download the engine and sample from Epic and try it in the Editor to check.

Lots of things look off: car contact with the road looks a little floaty, far draw-distance detail looks non-GI'd, and the reflections didn't look correct for the viewing angles - so I'm guessing they are just cube-mapped reflections in the compiled demo.

It also appears to be CPU clock dependent for performance. I've had a stability issue with one of my 8x DDR3 sticks in quad-channel mode and have temporarily disabled SpeedStep on my Xeon, so at its 2.7GHz base clock it hits a frame-rate wall and won't go above 26fps with my RTX 3060, with or without DLSS. Obviously the 30MB of shared L3 cache etc. means it is unlikely to be cache bound on my CPU.

Reflections on cars were the first thing that jumped out at me. I remember being blown away by those reflections on PS5, but I figured it was just in my head.

My i7-11700K has a very high clock that can go up to 5.1 GHz but mostly tops out at 4.8 GHz, yet I still wasn't able to go over 40 fps consistently no matter how much I reduced the resolution. It is super weird.
 
Last edited:

yamaci17

Member
OMG. I never saw this. I am glad this guy used the same fast car to show off the PS5 vs XSX comparison, because Alex the little shit used a fast car in his PS5 comparison and a slow car in the XSX benchmark. Clearly the framerate drops are directly tied to how fast you are going and how much you are crashing, and this video shows that. People laughed at me when I pointed this out back then. Dumb dumb dumb.

I am surprised to see that the PS5 and XSX are running at 1620p, with 1404p being the common resolution. DF made me think it was 1080p for whatever reason. Is it 1404p upscaled by TSR from a base resolution of 1080p, or is it native 1404p upscaled to 4K using TSR?
It has to be 1440p-to-4K TSR; details are well preserved despite the lower bound of the resolution range. LODs/details are at 4K.

TSR and DLSS actually have a big trick that has nothing to do with upscaling, AI and stuff: details and certain LODs remain at native 4K. That is why both 4K DLSS Quality and 4K TSR (with a 1440p internal resolution) have a huge overhead over native 1440p rendering, and that's how they provide 4K-like image quality.

It's practically a smart filter that reduces the resolution of stuff you mostly won't notice (my guess is shading resolution, shadows, reflections and so on).

Of course, these are my guesses. I don't think either DLSS or TSR does something magical to get "better" IQ than native 1440p.

So yes, my assumption is that the consoles run it with a full 4K base (output) resolution and a render resolution of 1300-1500p, and then TSR comes into play.

I currently have no idea how TSR operates in the PC demo, since there are no options to tweak it (or even if there are, I have no idea how it operates). TSR is not a thing you just "enable"; there have to be TSR modes, but I have no clue what the default is. On consoles it's probably bundled with a dynamic resolution solution that ties the resolution to the 30 FPS target.


TSR looks better than TAA at 1080p. I'd assume enabling TSR in this demo simply swaps TAA for TSR. It has to be DLAA-like, because GPU usage goes up and TSR clearly preserves fine detail better than native TAA, so it cannot be lowering resolution unless you find a way to specify a lower resolution.
 
Last edited:

winjer

Gold Member
OMG. I never saw this. I am glad this guy used the same fast car to show off the PS5 vs XSX comparison, because Alex the little shit used a fast car in his PS5 comparison and a slow car in the XSX benchmark. Clearly the framerate drops are directly tied to how fast you are going and how much you are crashing, and this video shows that. People laughed at me when I pointed this out back then. Dumb dumb dumb.

I am surprised to see that the PS5 and XSX are running at 1620p, with 1404p being the common resolution. DF made me think it was 1080p for whatever reason. Is it 1404p upscaled by TSR from a base resolution of 1080p, or is it native 1404p upscaled to 4K using TSR?

I had never noticed that in Alex's test. But it's an important error, one that clearly affects the results between the PS5 and Xbox Series X.
It's somewhat frequent to see him screw up benchmarks in similar ways, either through negligence or on purpose.
 

Dampf

Member
OMG. I never saw this. I am glad this guy used the same fast car to show off the PS5 vs XSX comparison, because Alex the little shit used a fast car in his PS5 comparison and a slow car in the XSX benchmark. Clearly the framerate drops are directly tied to how fast you are going and how much you are crashing, and this video shows that. People laughed at me when I pointed this out back then. Dumb dumb dumb.
Yeah, pretend like he did that on purpose to show the PS5 in a bad light.

Are you for real?
 

SlimySnake

Flashless at the Golden Globes
It has to be 1440p-to-4K TSR; details are well preserved despite the lower bound of the resolution range. LODs/details are at 4K.

TSR and DLSS actually have a big trick that has nothing to do with upscaling, AI and stuff: details and certain LODs remain at native 4K. That is why both 4K DLSS Quality and 4K TSR (with a 1440p internal resolution) have a huge overhead over native 1440p rendering, and that's how they provide 4K-like image quality.

It's practically a smart filter that reduces the resolution of stuff you mostly won't notice (my guess is shading resolution, shadows, reflections and so on).

Of course, these are my guesses. I don't think either DLSS or TSR does something magical to get "better" IQ than native 1440p.

So yes, my assumption is that the consoles run it with a full 4K base (output) resolution and a render resolution of 1300-1500p, and then TSR comes into play.

I currently have no idea how TSR operates in the PC demo, since there are no options to tweak it (or even if there are, I have no idea how it operates). TSR is not a thing you just "enable"; there have to be TSR modes, but I have no clue what the default is. On consoles it's probably bundled with a dynamic resolution solution that ties the resolution to the 30 FPS target.


TSR looks better than TAA at 1080p. I'd assume enabling TSR in this demo simply swaps TAA for TSR. It has to be DLAA-like, because GPU usage goes up and TSR clearly preserves fine detail better than native TAA, so it cannot be lowering resolution unless you find a way to specify a lower resolution.
How come your GPU utilization is at 54-58%? I noticed that in your video as well. I guess your 2700 CPU is the bottleneck here. That's the one NX Gamer uses in his console comparisons, and it always holds back the GPU.
 

yamaci17

Member
How come your GPU utilization is at 54-58%? I noticed that in your video as well. I guess your 2700 CPU is the bottleneck here. That's the one NX Gamer uses in his console comparisons, and it always holds back the GPU.

Of course the GPU is underutilized due to the CPU; it's a 3070 at 1080p. Clearly the demo is CPU bound near 25-30 FPS for my CPU. If anything, you can make the same argument for the consoles: they're not dropping frames because of their GPU but because of their CPU. By the same virtue, I could run at 4K 30 fps, which I would in this situation (since there's no point in not utilizing the GPU with a higher resolution input).

A 5800X, which is 50% faster than my CPU, only produces 50% more frames (so 40-45), and that would still leave the 3070 underutilized at 1080p (with 75-80% GPU usage). I usually play at high resolutions instead (supersampling) with ray tracing; that's why my build is unbalanced at 1080p.

We shall see if they optimize the engine for 60 fps in the end. I'd say they have to; most console folks nowadays are accustomed to smooth 60 fps modes. If the engine is not capable of pushing 60 frames on a 3.5 GHz Zen 2 chip, then the future is doomed, considering devs are abandoning their engines for UE5. Even the newest 5800X3D barely pushes 50-60% over Zen 2 chips. Surely CPU demands should come down in the future with extra optimizations within the engine. I can just swap in a cheap 5600X in the future of course, but that's another story (and it still wouldn't be enough for a consistent, smooth 60 frames). For now, there are no games where I cannot get 60 fps reliably aside from this demo (and maybe Star Citizen and Flight Sim, but I don't play those), and that's my only expectation for the CPU in my rig. I don't chase high framerates unless it's Doom Eternal or Rainbow Six, in which case I can also reliably get 144 fps, thankfully.
 

winjer

Gold Member
It has to be 1440p-to-4K TSR; details are well preserved despite the lower bound of the resolution range. LODs/details are at 4K.

TSR and DLSS actually have a big trick that has nothing to do with upscaling, AI and stuff: details and certain LODs remain at native 4K. That is why both 4K DLSS Quality and 4K TSR (with a 1440p internal resolution) have a huge overhead over native 1440p rendering, and that's how they provide 4K-like image quality.

It's not a trick, but rather a necessity. A lot of LODs are defined by resolution, simply because a higher pixel count can show more detail. It would be pointless to upscale an image if these LODs are not adjusted according to the final resolution.
The best example of this with upscaling techs is the MipMap LOD bias. This setting is important for loading the correct level of mipmapped textures at the correct distance; otherwise they will look blurrier.
We can adjust the MipMap level for textures while rendering at a lower resolution, but most of the detail of those sharper mips will be wasted because there are so few pixels to show it. It will also increase VRAM usage, and it might cause shimmering in textures if the bias value is pushed too far.
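For illustration only (this isn't something the demo's menus expose, and the value here is just a guess to experiment with), the mip bias can be nudged globally via the stock cvar in an Engine.ini, if I remember the name right:

[SystemSettings]
r.MipMapLODBias=-1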

It's practically a smart filter that reduces the resolution of stuff you mostly won't notice (my guess is shading resolution, shadows, reflections and so on).

It's not a filter, but rather a stage in the rendering pipeline that requires a lot of information to be done correctly.
It has to access several rendering buffers, such as depth, color and motion, to be able to reconstruct an image.

As I said above, a lot of graphical features are tied to screen resolution. Depending on the engine, this can be tessellation, volumetric fog, MipMaps, etc.
But the main advantage of upscaling is that there are fewer fragments to shade.

Of course, these are my guesses. I don't think either DLSS or TSR does something magical to get "better" IQ than native 1440p.

No, it's not magic. It's sampling pixels, in a jittered form, across a number of previous frames, to calculate pixels around the originals.
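In rough pseudo-terms (my simplification, not Epic's exact filter): each frame the camera is jittered by a sub-pixel offset, the previous output is reprojected using the motion vectors, and the new pixel ends up as something like lerp(reprojected_history, jittered_sample, alpha), with the history rejected wherever depth or velocity says it is no longer valid.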

So yes, my assumption is that the consoles run it with a full 4K base (output) resolution and a render resolution of 1300-1500p, and then TSR comes into play.

It's not rendered at 4K. It's rendered at a lower resolution, and the values for neighboring pixels are then extrapolated based on temporal accumulation.

I currently have no idea how TSR operates in the PC demo, since there are no options to tweak it (or even if there are, I have no idea how it operates). TSR is not a thing you just "enable"; there have to be TSR modes, but I have no clue what the default is. On consoles it's probably bundled with a dynamic resolution solution that ties the resolution to the 30 FPS target.

Here are all of the commands for TSR in this current build of UE5.
  1. r.TSR.Debug.SetupExtraPasses
  2. r.TSR.History.R11G11B10
  3. r.TSR.History.ScreenPercentage
  4. r.TSR.History.UpdateQuality
  5. r.TSR.RejectionAntiAliasingQuality
  6. r.TSR.ShadingRejection.HalfRes
  7. r.TSR.ShadingRejection.SpatialFilter
  8. r.TSR.Translucency.EnableResponiveAA
  9. r.TSR.Translucency.HighlightLuminance
  10. r.TSR.Translucency.SeparateTemporalAccumulation
  11. r.TSR.Velocity.Extrapolation
  12. r.TSR.Velocity.HoleFill
  13. r.TSR.Velocity.HoleFill.MaxScatterVelocity
  14. r.TSR.Velocity.WeightClampingPixelSpeed

To disable TSR and TAA:
r.AntiAliasingMethod
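(From memory, the accepted values in UE5 are 0 = None, 1 = FXAA, 2 = TAA, 3 = MSAA, 4 = TSR, so for example r.AntiAliasingMethod=2 swaps TSR back to plain TAA; double-check against the console's autocomplete.)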


TSR looks better than TAA at 1080p. I'd assume enabling TSR in this demo simply swaps TAA for TSR. It has to be DLAA-like, because GPU usage goes up and TSR clearly preserves fine detail better than native TAA, so it cannot be lowering resolution unless you find a way to specify a lower resolution.

It's not DLAA.
But if you want to do some kind of supersampling with TSR, you can use this command.
  1. r.TSR.History.ScreenPercentage=200
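(If I'm reading that cvar right, it keeps TSR's history buffer at 200% of the output resolution, so it behaves like supersampling and is accordingly much heavier on the GPU.)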
 
Last edited:

yamaci17

Member
It's not a trick, but rather a necessity. A lot of LODs are defined by resolution, simply because a higher pixel count can show more detail. It would be pointless to upscale an image if these LODs are not adjusted according to the final resolution.
The best example of this with upscaling techs is the MipMap LOD bias. This setting is important for loading the correct level of mipmapped textures at the correct distance; otherwise they will look blurrier.
We can adjust the MipMap level for textures while rendering at a lower resolution, but most of the detail of those sharper mips will be wasted because there are so few pixels to show it. It will also increase VRAM usage, and it might cause shimmering in textures if the bias value is pushed too far.



It's not a filter, but rather a stage in the rendering pipeline that requires a lot of information to be done correctly.
It has to access several rendering buffers, such as depth, color and motion, to be able to reconstruct an image.

As I said above, a lot of graphical features are tied to screen resolution. Depending on the engine, this can be tessellation, volumetric fog, MipMaps, etc.
But the main advantage of upscaling is that there are fewer fragments to shade.



No, it's not magic. It's sampling pixels, in a jittered form, across a number of previous frames, to calculate pixels around the originals.



It's not rendered at 4K. It's rendered at a lower resolution, and the values for neighboring pixels are then extrapolated based on temporal accumulation.



Here are all of the commands for TSR in this current build of UE5.
  1. r.TSR.Debug.SetupExtraPasses
  2. r.TSR.History.R11G11B10
  3. r.TSR.History.ScreenPercentage
  4. r.TSR.History.UpdateQuality
  5. r.TSR.RejectionAntiAliasingQuality
  6. r.TSR.ShadingRejection.HalfRes
  7. r.TSR.ShadingRejection.SpatialFilter
  8. r.TSR.Translucency.EnableResponiveAA
  9. r.TSR.Translucency.HighlightLuminance
  10. r.TSR.Translucency.SeparateTemporalAccumulation
  11. r.TSR.Velocity.Extrapolation
  12. r.TSR.Velocity.HoleFill
  13. r.TSR.Velocity.HoleFill.MaxScatterVelocity
  14. r.TSR.Velocity.WeightClampingPixelSpeed

To disable TSR and TAA:
r.AntiAliasingMethod



It's not DLAA.
But if you want to do some kind of supersampling with TSR, you can use this command.
  1. r.TSR.History.ScreenPercentage=200

I haven't said DLSS/TSR is magic. Read it again, carefully. I said, "I don't think either TSR or DLSS does something magical".

By filter, I meant a literal, developer-driven filter, not a rendering filter. Like, you filter stuff, classify it, and downgrade resolutions based on that filter (do's and don'ts). This varies; it's not a static thing. Take RDR 2 for example: it's the game where DLSS brings the least performance improvement. Clearly the devs filtered out most of the rendering work and only a handful of things actually get lowered. By that virtue, DLSS Quality even at 1440p looks very similar to 1440p, but with minimal performance gain (aside from the horrible oversharpening effect).

By 4K rendering, I meant the output of course, not the actual resolution (I should've stated it better). The output being 4K is important, and here's why: 4K + DLSS Performance (internal 1080p) looks MILES better than 1620p + DLSS Quality (internal 1080p). Having that high, sweet 4K base resolution is crucial for producing a pristine, clean image. I literally said "and a render resolution of 1300-1500p".


And in the end, it's still a trick. Just because it is necessary does not mean it's not a trick. If it were really a necessity, devs would've implemented it years ago; only with DLSS did this "necessity" start being used, and suddenly TSR and XeSS come along and I bet they use the same trick as well. It's a trick to me; it may be a necessity to you. In the end, most people believe it's some magic mumbo jumbo, which it is not, which we both agree on.
 
Last edited:

SlimySnake

Flashless at the Golden Globes
Of course the GPU is underutilized due to the CPU; it's a 3070 at 1080p. Clearly the demo is CPU bound near 25-30 FPS for my CPU. If anything, you can make the same argument for the consoles: they're not dropping frames because of their GPU but because of their CPU. By the same virtue, I could run at 4K 30 fps, which I would in this situation (since there's no point in not utilizing the GPU with a higher resolution input).

A 5800X, which is 50% faster than my CPU, only produces 50% more frames (so 40-45), and that would still leave the 3070 underutilized at 1080p (with 75-80% GPU usage). I usually play at high resolutions instead (supersampling) with ray tracing; that's why my build is unbalanced at 1080p.

We shall see if they optimize the engine for 60 fps in the end. I'd say they have to; most console folks nowadays are accustomed to smooth 60 fps modes. If the engine is not capable of pushing 60 frames on a 3.5 GHz Zen 2 chip, then the future is doomed, considering devs are abandoning their engines for UE5. Even the newest 5800X3D barely pushes 50-60% over Zen 2 chips. Surely CPU demands should come down in the future with extra optimizations within the engine. I can just swap in a cheap 5600X in the future of course, but that's another story (and it still wouldn't be enough for a consistent, smooth 60 frames). For now, there are no games where I cannot get 60 fps reliably aside from this demo (and maybe Star Citizen and Flight Sim, but I don't play those), and that's my only expectation for the CPU in my rig. I don't chase high framerates unless it's Doom Eternal or Rainbow Six, in which case I can also reliably get 144 fps, thankfully.
That's very interesting. I've heard horrid things about that CPU, and a lot of NX Gamer's benchmarks have been dismissed because he's used it. I actually think that CPU should be used to compare PC GPUs to the PS5 GPU, if that is the intention, because the PS5 CPU is likely holding back the PS5 GPU, so any PC vs PS5 comparisons using fancy 12th-gen i9 CPUs are a bit unfair. Although, in a PC vs PC comparison, he should not be using that CPU against another system with a 3600 when doing GPU comparisons, which he did back in his Deathloop review.

And yeah, the CPU performance in this demo needs to be way better than it is at the moment. Reducing resolution and giving the GPU more headroom does fuck all because it's CPU bound. Fortnite is on UE5 and easily pushes 120 fps on consoles, so it must come down to hardware Lumen and whatever traffic and NPC simulation they are using. The engine is capable of 60 fps, just not when they are pushing CPU-heavy tasks.
 
Last edited:

SlimySnake

Flashless at the Golden Globes
Had the chance to snag a 3080 for $900 last night and dropped the ball. I am so pissed at myself. This benchmark pretty much matches my 2080, so I could've doubled my FPS.

EhGHxin.jpg



What's interesting is that the RDNA cards are overperforming their TFLOPS and outperforming the 20-series Nvidia cards, so this demo definitely favors AMD, which makes sense since this particular demo was made for consoles. The 5700 XT matches the 2080 instead of the 2070 like it does in other benchmarks, and the 6700 XT matches the 2080 Ti. And yet the 30 series seems to be where it should be when compared to the 6000-series cards, at least in standard rasterization. Clearly Nvidia's ray tracing advantage is not showing up here, which is very interesting for all next-gen games using UE5's Lumen going forward.

This will also give the RDNA 2.0 cards a big boost, because now their ray tracing and DLSS disadvantage no longer holds them back. I can get a 6900 XT for $1,000 right now instead of wasting an extra $300 on a 3080 Ti for 1 more FPS in an engine that will be used by the vast majority of games next gen.
 
Last edited:

hlm666

Member
This will also give the RDNA 2.0 cards a big boost, because now their ray tracing and DLSS disadvantage no longer holds them back. I can get a 6900 XT for $1,000 right now instead of wasting an extra $300 on a 3080 Ti for 1 more FPS in an engine that will be used by the vast majority of games next gen.
If UE5 performance is your main priority, I would suggest not even buying a GPU until the first UE5 game comes out. We have no idea what an AMD or Nvidia optimised driver will improve, or whether they will add things to the UE code base.
 

SlimySnake

Flashless at the Golden Globes
If UE5 performance is your main priority, I would suggest not even buying a GPU until the first UE5 game comes out. We have no idea what an AMD or Nvidia optimised driver will improve, or whether they will add things to the UE code base.
That's true. However, we all know the bots are going to snatch up those GPUs the moment they go on sale. Us plebs won't get them until 2024 at best.
 

hlm666

Member
That's true. However, we all know the bots are going to snatch up those GPUs the moment they go on sale. Us plebs won't get them until 2024 at best.
2024 is about the earliest I expect to see anything major on UE5 anyway. It's going to take a while for teams to switch, get to grips with everything, modify things if they need to, etc. By then RDNA 4 and Hopper (is that still the one after Ada?) will be looming.
 

kikkis

Member
And it looks like it's much easier to use UE5 than to create your own engine with Nanite- and Lumen-style technology.
In a way it's quite odd that many studios are going to UE5 instead of their own engines, since Nanite and Lumen are common knowledge thanks to their presentations at SIGGRAPH etc.

I guess if a game doesn't need something super specific, Unreal is the way to go, since there isn't much supply of engine programmers and there isn't much point in building an engine that's just an Unreal alternative without all the features and tooling.
 

vpance

Member
And it looks like it's much easier to use UE5 than to create your own engine with Nanite- and Lumen-style technology.

With Lumen, I think other devs may have figured out something similar, since a lot of progress in real-time GI was already made last gen.

But as for Nanite equivalents, it seems unlikely. I feel like we would've seen glimpses of that from others already if they existed. The ability to show "unlimited detail" is the real game changer.
 

Dampf

Member
Had the chance to snag a 3080 for $900 last night and dropped the ball. I am so pissed at myself. This benchmark pretty much matches my 2080, so I could've doubled my FPS.

EhGHxin.jpg



What's interesting is that the RDNA cards are overperforming their TFLOPS and outperforming the 20-series Nvidia cards, so this demo definitely favors AMD, which makes sense since this particular demo was made for consoles. The 5700 XT matches the 2080 instead of the 2070 like it does in other benchmarks, and the 6700 XT matches the 2080 Ti. And yet the 30 series seems to be where it should be when compared to the 6000-series cards, at least in standard rasterization. Clearly Nvidia's ray tracing advantage is not showing up here, which is very interesting for all next-gen games using UE5's Lumen going forward.

This will also give the RDNA 2.0 cards a big boost, because now their ray tracing and DLSS disadvantage no longer holds them back. I can get a 6900 XT for $1,000 right now instead of wasting an extra $300 on a 3080 Ti for 1 more FPS in an engine that will be used by the vast majority of games next gen.

Be careful when interpreting that data.

This demo is heavily CPU limited given its current state of optimization, and we all know AMD's drivers are better optimized for handling games in a CPU limit. This city sample is not a reliable indication of how future games might perform at all.

Plus, the sample enables hardware ray tracing automatically on GPUs that support it (RDNA1 obviously doesn't), so a framerate comparison is a bit nonsensical, as the visual quality between these cards is not the same. Unlike in previous UE4 games, hardware RT doesn't tank the framerate anymore in UE5 but performs pretty similarly. However, it can look drastically different in places.
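For an apples-to-apples run on an RT-capable card, I believe (cvar name from memory, so verify in the console) you can force software Lumen by adding this to Engine.ini:

[SystemSettings]
r.Lumen.HardwareRayTracing=0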

software.00_00_07_30.fxj51.png
 
Had the chance to snag a 3080 for $900 last night and dropped the ball. I am so pissed at myself. This benchmark pretty much matches my 2080, so I could've doubled my FPS.

EhGHxin.jpg



What's interesting is that the RDNA cards are overperforming their TFLOPS and outperforming the 20-series Nvidia cards, so this demo definitely favors AMD, which makes sense since this particular demo was made for consoles. The 5700 XT matches the 2080 instead of the 2070 like it does in other benchmarks, and the 6700 XT matches the 2080 Ti. And yet the 30 series seems to be where it should be when compared to the 6000-series cards, at least in standard rasterization. Clearly Nvidia's ray tracing advantage is not showing up here, which is very interesting for all next-gen games using UE5's Lumen going forward.

This will also give the RDNA 2.0 cards a big boost, because now their ray tracing and DLSS disadvantage no longer holds them back. I can get a 6900 XT for $1,000 right now instead of wasting an extra $300 on a 3080 Ti for 1 more FPS in an engine that will be used by the vast majority of games next gen.
So it turns out no one had the hardware to run it at a locked 1440p 60fps after all?

Here come the 30fps Unreal Engine 5 games on consoles.
 
Last edited:

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
So it turns out no one had the hardware to run it at a locked 1440p 60fps after all?

Here come the 30fps Unreal Engine 5 games on consoles.
Lumen is still not as optimized as it should be, in reality.
In fact, I would argue that the first UE5 games to truly wow us on PC will use RTGI and full-on ray-traced reflections.
 

ToTTenTranz

Banned
Seems like UE5 loves Alder Lake. Maybe all that extra memory bandwidth from DDR5 helps in this big city. Or that big jump in IPC.
It's also propping up Comet Lake and Rocket Lake a lot. There's also not a lot of per-clock difference between Zen 1, Zen 1+, Zen 2 and Zen 3, nor is it looking at more than 6 fast cores on AMD archs.

I think it's more of a compiler flag thing. Perhaps it was compiled on an Alder Lake system, so the compiler didn't enable any AMD optimizations at all and it's running in compatibility mode on all AMD CPUs.
The same could be true for AMD and pre-Ampere GPUs, if it was compiled on an Nvidia Ampere card.
 
OMG. I never saw this. I am glad this guy used the same fast car to show off the PS5 vs XSX comparison, because Alex the little shit used a fast car in his PS5 comparison and a slow car in the XSX benchmark. Clearly the framerate drops are directly tied to how fast you are going and how much you are crashing, and this video shows that. People laughed at me when I pointed this out back then. Dumb dumb dumb.

I am surprised to see that the PS5 and XSX are running at 1620p, with 1404p being the common resolution. DF made me think it was 1080p for whatever reason. Is it 1404p upscaled by TSR from a base resolution of 1080p, or is it native 1404p upscaled to 4K using TSR?



Reflections on cars were the first thing that jumped out at me. I remember being blown away by those reflections on PS5, but I figured it was just in my head.

My i7-11700K has a very high clock that can go up to 5.1 GHz but mostly tops out at 4.8 GHz, yet I still wasn't able to go over 40 fps consistently no matter how much I reduced the resolution. It is super weird.
Hahaha, wait. Do you think Alex only used the 10 seconds shown for his benchmark, or do you think it's more likely that Audi, who edited the video, just picked two little clips to highlight what they were talking about?
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
So it turns out no one had the hardware to run it at a locked 1440p 60fps after all?

Here come the 30fps Unreal Engine 5 games on consoles.
Just to double down:
On consoles, yeah, it's gonna be a hassle using Lumen; we've got to rely on TSR to help these consoles last.
But on PC, devs have options.
RTXGI and other implementations of GI within Unreal will produce results that beat Lumen.
Lumen's reflections aren't that good; we've already seen better reflections in real time, so it's just about gauging whether it's worth using or not.
This specific demo doesn't really give any real indication of what UE5 games will look or run like.
I've been messing with UE4 and UE5, trying all the mixes of implementations to get the best results at the best performance. Relying solely on Lumen is "kinda" a mistake; if you are developing on PC, Unreal is easily extensible for really good results.
 