
"We're able to show the difference from what's possible today with what's possible tomorrow": Inside Epic's Unreal Engine 5

Heisenberg007

Gold Journalism
This is all really fascinating, and I’m looking forward to seeing if it can bear similar fruit to the demo, which was utterly, unquestionably, next-gen.

My only concern is the storage requirements, which are somewhat addressed here, but I recall them throwing out an absolutely massive size figure just for that short demo.

That’s part and parcel (Sadiq will not ruin this saying) of all new paradigms, though. There’s generally always a way to figure it out, and it looks like they’re on their way. I’m super excited to play games that look this good.

That's a very valid concern, which is why they also addressed this at the end of the article. And I believe that's why PS5's design decisions regarding insanely powerful decompressor units, plus the use of Kraken + Oodle, will come in handy.

This post (+ comments by developers at the end) paints a beautiful picture of the future of gaming.
 

farmerboy

Member


Can't wait to see the results and how well it scales across all machines.

With Sony's investment in Epic, and how well Unreal 5 and Nanite work with the PS5, I wonder if first-party studios will continue using their own bespoke engines or move to Unreal 5?
 

Shmunter

Member
Of course it will be better on a high-spec PC. What I was saying is that, given that most PCs will have only a fraction of the I/O bandwidth of the PS5, and that they will want to make sure UE5 works fine across a wide range of devices, they'll need to make some compromises relative to what they could achieve if all devices had the same bandwidth as the PS5.


You are delusional if you think that they will make an engine that only works with devices that have similar I/O to the PS5. Most of their market is not there.
The engine will be on a spectrum, similarly to your gender
 

Cerny and Sony are at the forefront of innovation.

Somehow, wherever you hear about UE5, you will also find jealous Xbox and PC gamers. Why, you ask? Who knows. There is no need to be upset; I thought Cyberpunk 2077 at ultra psycho settings looked better than the UE5 demo?
 

cireza

Member
We are able to show what is possible: a marginal visual update requiring a huge hardware bump. Such a win for everybody!

And if every company could solder its SSD directly onto the PCB, it would be even more awesome. Who wants to repair stuff anyway?
 

geordiemp

Member
Exactly, and in that spectrum the PS5 won't be the device with the highest specs. Now draw your own conclusions.

PS5 will have the lowest SSD I/O latency at the moment, and for rendering small triangles Cerny himself said that's why he preferred high clocks; 2.23 GHz is not too shabby, but not as high as RDNA2 PC parts.

PS5 will come back down to earth with Lumen, as it seems most of the frame time in Land of Nanite was spent on lighting.
 

Hudo

Member
If the tooling, the editor, and the integration with compilers and IDEs remain the same, they can shove their shit up their asses.
 

sinnergy

Member
We are able to show what is possible: a marginal visual update requiring a huge hardware bump. Such a win for everybody!

And if every company could solder its SSD directly onto the PCB, it would be even more awesome. Who wants to repair stuff anyway?
MS's SSD is not soldered! 😎
 

onesvenus

Member
PS5 will have the lowest SSD I/O latency at the moment, and for rendering small triangles Cerny himself said that's why he preferred high clocks; 2.23 GHz is not too shabby, but not as high as RDNA2 PC parts.

PS5 will come back down to earth with Lumen, as it seems most of the frame time in Land of Nanite was spent on lighting.
Are you implying UE5 will run better on PS5 than on a high-spec PC? Because that was the initial claim.

What I'm saying is that, having to run on a variety of devices that don't support I/O as fast as the PS5's, Epic won't develop something that really exploits PS5's capabilities if that means leaving other platforms behind. What they will do instead is compromise on what they would be able to do with I/O if they developed UE5 only for the PS5. And with that in mind, PS5 won't be the best device on which to run UE5.
 

geordiemp

Member
Are you implying UE5 will run better on PS5 than on a high-spec PC? Because that was the initial claim.

What I'm saying is that, having to run on a variety of devices that don't support I/O as fast as the PS5's, Epic won't develop something that really exploits PS5's capabilities if that means leaving other platforms behind. What they will do instead is compromise on what they would be able to do with I/O if they developed UE5 only for the PS5. And with that in mind, PS5 won't be the best device on which to run UE5.

I believe PS5 will run Nanite with advantages, but Nanite at 1440p was only something like <8 ms of the 32 ms frame time, and PS5's I/O and small-triangle focus absolutely crushed the rendering part (the same frame cost as Fortnite, which is nuts, and into 120 FPS territory). The lighting part, Lumen, slowed everything down, and hence we only got 30 FPS in the demo. I understand Epic were working on getting to 60 FPS, which is more likely focused on the frame-time cost of the lighting.

High-end PCs will most likely run Lumen better than PS5, so... as I said, swings and roundabouts.

I was hoping the article would expand on how Lumen is coming along optimisation-wise, and on any info about how Nanite renders things like grass, trees, etc. But it seemed mainly a PR piece, so...
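To put rough numbers on that (a minimal sketch using only the approximate figures quoted above, not official Epic profiling data):

```python
frame_time_ms = 32.0        # ~30 FPS demo -> ~32 ms per frame
nanite_ms = 8.0             # claimed geometry/rendering share (<8 ms)
other_ms = frame_time_ms - nanite_ms  # ~24 ms left, largely Lumen/lighting

print(round(1000 / frame_time_ms))  # ~31 FPS, matching the demo
print(round(1000 / 60, 1))          # 16.7 ms budget needed for 60 FPS
# Even if Nanite were free, ~24 ms of lighting alone would blow a
# 16.7 ms budget -- hence the optimisation focus lands on Lumen.
```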
 

noise36

Member
Blah blah blah, more Epic marketing and sales speak. The big change, apart from the increased flops, is SSDs becoming standard; everything else is marginal.
 
ITT: a bunch of people giving their opinions. Yawn. I'll wait for the games.

Noooooo... a bunch of people giving their opinions on a web forum?!? How can they keep doing this? Won't someone think of the children!!!!!!! /s

What is a web forum for otherwise? What a bizarre post.

We are able to show what is possible: a marginal visual update requiring a huge hardware bump. Such a win for everybody!

And if every company could solder its SSD directly onto the PCB, it would be even more awesome. Who wants to repair stuff anyway?

I believe PS5 will run Nanite with advantages, but Nanite at 1440p was only something like <8 ms of the 32 ms frame time, and PS5's I/O and small-triangle focus absolutely crushed the rendering part (the same frame cost as Fortnite, which is nuts, and into 120 FPS territory). The lighting part, Lumen, slowed everything down, and hence we only got 30 FPS in the demo. I understand Epic were working on getting to 60 FPS, which is more likely focused on the frame-time cost of the lighting.

High-end PCs will most likely run Lumen better than PS5, so... as I said, swings and roundabouts.

I was hoping the article would expand on how Lumen is coming along optimisation-wise, and on any info about how Nanite renders things like grass, trees, etc. But it seemed mainly a PR piece, so...
Given Lumen is a fully dynamic GI system, it was always expected to be expensive.

On the other hand, devs can still choose to use their own static pre-baked lighting systems for games targeting higher framerates. And the on-board decompression, higher throughput, and lower latency of console SSDs will benefit such systems even more, since more precomputed lighting data can be utilised on a scene-by-scene basis, further reducing pressure on execution resources to compute diffuse lighting effects in real time.
 

yurinka

Member
How much would you like to bet PC will run it at higher IQ and framerate than PS5? The PS5 version will be the one with compromises in comparison.
I assume some day in the future it will be possible, once new high-end PCs match the I/O capabilities and related dedicated hardware of the PS5, or have enough extra horsepower to compensate. With the technology available today PCs can't do that, but they will be able to run this UE5 demo and its features.
I'm all for better graphics and better LODs, but they really need to make a game that shows off those features, and please, not Fortnite.
With engines optimized for next gen like UE5 they won't need LODs: devs will simply load the highest-quality version of each character or environment object, and then the engine will use the new dedicated hardware to reduce its quality to what is visible at whatever resolution they are rendering.

Resolution and distance from the camera will define the quality of the object, which will be calculated in real time by that new hardware (the Geometry Engine in the case of PS5) instead of relying on a few predefined LOD versions of the object. This whole concept of insanely fast streaming is a big paradigm shift in terms of development; it will need a big engine change. Not sure if we'll see it first in UE5 (so it may well debut with Fortnite, yes) or in a Sony engine, because obviously Sony must have been playing with it at the same time Epic did. We may see it first in Horizon 2 (maybe not, because it's cross-gen) or God of War Ragnarok, but who knows.
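A minimal sketch of that idea: pick detail continuously from screen coverage each frame instead of snapping between a few authored LODs. Everything here is a hypothetical illustration, not actual UE5 or Geometry Engine API:

```python
import math

def triangle_budget(object_radius, distance, fov_deg, screen_height_px):
    # Project the object's bounding sphere to pixels: detail finer than
    # ~1 triangle per pixel is invisible, so pixel coverage caps geometry.
    projected_px = screen_height_px * object_radius / (
        distance * math.tan(math.radians(fov_deg) / 2))
    return int(projected_px ** 2)  # rough pixel area -> useful triangle cap

# The budget shrinks smoothly with distance and resolution, so the engine
# can stream just enough of the one full-quality mesh -- no LOD pop.
print(triangle_budget(1.0, 5.0, 90.0, 1440))    # close-up: big budget
print(triangle_budget(1.0, 100.0, 90.0, 1440))  # far away: tiny budget
```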
 

Lister

Banned
DirectStorage will be enough for what >99% of next-gen games will need to accomplish on PC.

PC tech like RTX IO will more than cover the <1% of games that will require more.
 
Can't wait to hear the excuses when UE5 releases on PC this year and we're running it in native 4K at over 60fps with higher image quality and effects. There will be more threads than the Cyberpunk meltdown.
 

Panajev2001a

GAF's Pleasant Genius
Can't wait to hear the excuses when UE5 releases on PC this year and we're running it in native 4K at over 60fps with higher image quality and effects. There will be more threads than the Cyberpunk meltdown.
The PCMR argument always hangs on a quite disingenuous “here is a PC you can build or buy that does something better than your console”, as if it meant something of incredible value / a killing blow. To this argument: by this time next year there will be about 35-40 million console users (XSX|S + PS5) with HW and SW stacks built around the high-throughput, low-latency data streaming concept this engine was built upon. How many PCs will there be on the market with the same capabilities? Thought so...

A PC with a 16-20 GB VRAM GPU that is going to use a few GB as a RAMDISK or brute-force this in other ways? Would not be shocked.
 

Lister

Banned
Can't wait to hear the excuses when UE5 releases on PC this year and we're running it in native 4K at over 60fps with higher image quality and effects. There will be more threads than the Cyberpunk meltdown.

It'll be the 750ti all over again.

Remember that shit? Holy cow, this super cheap entry-level GPU came in wrecking the PS4 in performance and graphics settings, and people were literally getting banned on this very forum for even mentioning it. Like, that's the level of salt/tears.

And it was interesting because on paper the 750ti was definitely not up to par with the PS4; it was showing that the "supa charged, to the metal coding!!" was bullshit, as most PC gamers had been saying all along. Everyone knew it was a temporary thing because of its memory buffer. Eventually games would start taking advantage of that extra VRAM on the PS4 and the 750ti would not keep up, but man, were Sony fanboys pissed.

We even had screenshots in threads of games running on the 750ti at obviously better graphics and resolution settings than the PS4, not to mention performance, but you had people coming in saying they couldn't see a difference. Lol. I think Dying Light was one of those games, IIRC.
 
I think you underestimate how demanding AI and combat code are... You realize that even games on PS2 could have good enemy AI, right?
You aren't going to lose 10% of your framerate by adding a few enemies.
I recall the AI in Half-Life 1/2 and the first Halo being more interesting than what we have in many modern games... Those 1-2 GHz CPUs with barely any RAM at all surely were powerful!
 

Lister

Banned
The PCMR argument always hangs on a quite disingenuous “here is a PC you can build or buy that does something better than your console”, as if it meant something of incredible value / a killing blow. To this argument: by this time next year there will be about 35-40 million console users (XSX|S + PS5) with HW and SW stacks built around the high-throughput, low-latency data streaming concept this engine was built upon. How many PCs will there be on the market with the same capabilities? Thought so...

A PC with a 16-20 GB VRAM GPU that is going to use a few GB as a RAMDISK or brute-force this in other ways? Would not be shocked.

There's no need for that with DirectStorage and RTX IO, as I pointed out. I'd also bet SSDs are super common in gaming PCs today, never mind in a couple of years when these technologies, and the games that use them, will really start to come out.
 

Elios83

Member
This is a really important partnership for Sony; once the cross-gen phase is over, tons of games using this engine will release from 2022 onwards.
So the fact that the engine is being so optimized for PS5 will yield a lot of benefits for next-gen console users.
 

rodrigolfp

Haptic Gamepads 4 Life
If by 'PC' you mean the 1% of PCs in the world that cost 4x-6x the price of the console and are made with future I/O technologies, then... probably. It'd be embarrassing if it didn't while still costing 4x-6x.

But if you mean the more than 51% of PCs in the world that only cost $399-$499 and are based on pre-2020 tech, then I'm open to betting.
 

Lister

Banned
I recall the AI in Half-Life 1/2 and the first Halo being more interesting than what we have in many modern games... Those 1-2 GHz CPUs with barely any RAM at all surely were powerful!

I'm more interested in seeing if game devs can leverage Nvidia's machine learning hardware to improve game AI. Right now it's mostly used for DLSS; I'm hoping it gets leveraged for other things, like ray-tracing denoising, and eventually game AI.
 

Panajev2001a

GAF's Pleasant Genius
There's no need for that with DirectStorage and RTX IO, as I pointed out. I'd also bet SSDs are super common in gaming PCs.
Let’s see how many of those SSDs match the kind of bandwidth we are talking about, and the optimisations to reduce latency... and do not burn through most of your CPU cores to stream and process your data (why do you think MS, and especially Sony, spent so many transistors on HW acceleration engines for data streaming and decompression?).

Again, tech is coming to PCs to allow some PC configurations to match and outdo what consoles are able to do now, if coded appropriately. By the time UE5 is out there will be tens of millions of consumers with capable HW and battle-tested SDKs to take advantage of it on consoles... how many PCs will be in that position? PC is not a single HW spec.
 
It'll be the 750ti all over again.

Remember that shit? Holy cow, this super cheap entry-level GPU came in wrecking the PS4 in performance and graphics settings, and people were literally getting banned on this very forum for even mentioning it. Like, that's the level of salt/tears.

And it was interesting because on paper the 750ti was definitely not up to par with the PS4; it was showing that the "supa charged, to the metal coding!!" was bullshit, as most PC gamers had been saying all along. Everyone knew it was a temporary thing because of its memory buffer. Eventually games would start taking advantage of that extra VRAM on the PS4 and the 750ti would not keep up, but man, were Sony fanboys pissed.

We even had screenshots in threads of games running on the 750ti at obviously better graphics and resolution settings than the PS4, not to mention performance, but you had people coming in saying they couldn't see a difference. Lol. I think Dying Light was one of those games, IIRC.
Yup, it has been a thing for a long, long time: the PS1, N64, Dreamcast, PS2, OG Xbox, Xbox 360, and PS4 all had amazing price/performance ratios compared to gaming PCs at release... The same is true now for the PS5 and Series X (not so sure about the S). But even then, like always, because of memory configurations and available CPU power the PC will always have some kind of benefit.

On the other hand, I don't see low-end PCs early next year having a cheap 8-core/16-thread CPU + 16 GB of RAM, with a 4050 Ti that has 8 GB of GDDR mounted on it (maybe 12 GB would be needed to match the PS5), is about as powerful as a 2080, and costs less than $200. I wish, but I don't see this happening. Add a MB/PSU/chassis to the mix.

I would be surprised if it worked even with used parts.
 

Lister

Banned
Let’s see how many of those SSDs match the kind of bandwidth we are talking about, and the optimisations to reduce latency... and do not burn through most of your CPU cores to stream and process your data (why do you think MS, and especially Sony, spent so many transistors on HW acceleration engines for data streaming and decompression?).

What part of DirectStorage and RTX IO do you not understand? Or I guess you just don't know about the tech?

The main bottlenecks in I/O are the CPU overhead of handling tons of requests, the decompression of assets, and some performance left on the floor due to not having nitty-gritty access to the pipeline.

DirectStorage massively reduces the main issue of making efficient requests while significantly cutting CPU time and giving devs lower-level access to the I/O pipeline. RTX IO handles the decompression part, allowing data to go directly to the GPU in its compressed format, massively lowering bandwidth requirements.
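A conceptual sketch of the contrast being described. Every helper name here is hypothetical; the real DirectStorage/RTX IO APIs are C++/Windows-specific, so this only illustrates the shape of the change:

```python
# Traditional path: the CPU services each read and decompresses each asset.
def load_assets_cpu(paths, read_file, cpu_decompress, upload_to_gpu):
    for p in paths:                   # one request round-trip per file
        raw = read_file(p)            # blocking read, CPU-managed
        asset = cpu_decompress(raw)   # burns CPU cores at NVMe speeds
        upload_to_gpu(asset)          # uncompressed copy crosses the bus

# Batched path: submit many requests at once, keep data compressed until
# it reaches the GPU, and let the GPU do the decompression.
def load_assets_batched(paths, submit_batch, upload_to_gpu, gpu_decompress):
    handles = submit_batch(paths)     # many reads, minimal CPU overhead
    for raw in handles:               # completed reads, still compressed
        upload_to_gpu(raw)            # smaller transfer over the bus
        gpu_decompress(raw)           # offloaded to the GPU
```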
 

Lister

Banned
Again, tech is coming to PCs to allow some PC configurations to match and outdo what consoles are able to do now, if coded appropriately. By the time UE5 is out there will be tens of millions of consumers with capable HW and battle-tested SDKs to take advantage of it on consoles... how many PCs will be in that position? PC is not a single HW spec.

A lot? I mean, it won't be the same numbers as BOTH consoles together, but it will probably be a match for either one. It's not like third-party publishers or Microsoft are going to suddenly ignore the behemoth that is PC gaming just because it, all by its lonesome, isn't bigger than two other platforms combined.

Right now, just talking about the GPU requirements to match or trounce the current new consoles, according to the Steam survey well over 15 million gamers can do this. That's today, not two years from now when games taking advantage of these technologies will actually start to come out.

This is not 2005, my man. This forum is sometimes stuck up its own ass when it comes to its console bubble. PC gaming is GIGANTIC today. Even small percentages equal tens of millions of gamers. PC gaming hasn't been some niche thing for a very long time.
 

Lister

Banned
Yup, it has been a thing for a long, long time: the PS1, N64, Dreamcast, PS2, OG Xbox, Xbox 360, and PS4 all had amazing price/performance ratios compared to gaming PCs at release... The same is true now for the PS5 and Series X (not so sure about the S). But even then, like always, because of memory configurations and available CPU power the PC will always have some kind of benefit.

On the other hand, I don't see low-end PCs early next year having a cheap 8-core/16-thread CPU + 16 GB of RAM, with a 4050 Ti that has 8 GB of GDDR mounted on it (maybe 12 GB would be needed to match the PS5), is about as powerful as a 2080, and costs less than $200. I wish, but I don't see this happening. Add a MB/PSU/chassis to the mix.

I would be surprised if it worked even with used parts.

A 4050 Ti is likely to meet those requirements, but yeah, I don't see Nvidia going much below $300 when it comes out. A 3060 Ti also beats a 2080, especially when it comes to ray tracing, and is certainly faster than a PS5, and it might be $200 by then. It won't be good enough for 4K though, because of its VRAM, but 1080p to 1440p should be fine.
 
On the other hand, devs can still choose to use their own static pre-baked lighting systems for games targeting higher framerates. And the on-board decompression, higher throughput, and lower latency of console SSDs will benefit such systems even more, since more precomputed lighting data can be utilised on a scene-by-scene basis, further reducing pressure on execution resources to compute diffuse lighting effects in real time.

How big is lighting data, usually?

I'm thinking big-budget AAA games might start to come in at about 200GB. Factoring in decompression at a roughly 3x ratio, are we looking at 600GB of uncompressed data in next-gen games?
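For what it's worth, the arithmetic in that question spelled out (the 200GB install size and 3x ratio are the poster's assumptions, not confirmed figures):

```python
on_disk_gb = 200       # assumed compressed install size
compression_ratio = 3  # assumed average; Kraken-class ratios vary by data
print(on_disk_gb * compression_ratio)  # 600 GB of decompressed data
# That 600 GB never sits on disk at once; it's the total volume the
# decompressor would feed to RAM/VRAM over the course of play.
```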
 

Panajev2001a

GAF's Pleasant Genius
What part of DirectStorage and RTX IO do you not understand? Or I guess you just don't know about the tech?

The main bottlenecks in I/O are the CPU overhead of handling tons of requests, the decompression of assets, and some performance left on the floor due to not having nitty-gritty access to the pipeline.

DirectStorage massively reduces the main issue of making efficient requests while significantly cutting CPU time and giving devs lower-level access to the I/O pipeline. RTX IO handles the decompression part, allowing data to go directly to the GPU in its compressed format, massively lowering bandwidth requirements.
I know about those pieces of tech; neither addresses the problem with dedicated HW, instead leaving the CPU and/or the GPU to pick up the tab, neither is available for PC now, and we don't know the latency each link in the chain will effectively add.

I seriously doubt PC games will be stored uncompressed on the SSD, so you will need a block to undo the zlib/Kraken compression used by games, and this bit alone can take several cores' worth of HW.
Both consoles have dedicated HW decoder blocks; XSX gets away with less dedicated I/O HW partly because it targets about half the bandwidth PS5’s SSD solution is optimised for.

The GPU can dedicate resources to uncompressing the data and also dedicate VRAM to the uncompressed data. Sure, no argument that you can build a PC handling the same scenario at some point in the future, but I do not agree that the PC gaming audience with those specs will be even comparable to the number of consoles sold by that time (let alone the price of those parts).
 

Darius87

Member
We could get a rough approximation of how large assets are from Quixel Megascans: I've seen more than 1GB for just one movie-quality asset, but of course it varies, the UE5 demo reuses the same assets, and culling is involved, so my guess for streaming would be around 3-5GB/s in normal scenes, and up to 8GB/s in the flying scene.
Also, this demo doesn't even touch the GE in PS5; it runs on async compute, so standard CUs do the geometry. Another thing: level of detail depends on I/O throughput, not so much on the GPU like some think, and pop-in depends on the latency between SSD -> RAM.
PS5 is, for now, the most capable system at these specs.
 

Lister

Banned
I know about those pieces of tech; neither addresses the problem with dedicated HW, instead leaving the CPU and/or the GPU to pick up the tab, neither is available for PC now, and we don't know the latency each link in the chain will effectively add.

I seriously doubt PC games will be stored uncompressed on the SSD, so you will need a block to undo the zlib/Kraken compression used by games, and this bit alone can take several cores' worth of HW.
Both consoles have dedicated HW decoder blocks; XSX gets away with less dedicated I/O HW partly because it targets about half the bandwidth PS5’s SSD solution is optimised for.

The GPU can dedicate resources to uncompressing the data and also dedicate VRAM to the uncompressed data. Sure, no argument that you can build a PC handling the same scenario at some point in the future, but I do not agree that the PC gaming audience with those specs will be even comparable to the number of consoles sold by that time (let alone the price of those parts).

I don't think you do understand, because RTX IO is leveraging the GPU for the decompression. In other words, it IS dedicated hardware.

From Nvidia's tech page on RTX I/O:

Nvidia RTX IO said:
We’ve created NVIDIA RTX IO, a suite of technologies that enable rapid GPU-based loading and game asset decompression, accelerating I/O performance by up to 100x compared to hard drives and traditional storage APIs. When used with Microsoft’s new DirectStorage for Windows API, RTX IO offloads dozens of CPU cores’ worth of work to your GeForce RTX GPU, improving frame rates, enabling near-instantaneous game loading, and opening the door to a new era of large, incredibly detailed open world games.

Specifically, NVIDIA RTX IO brings GPU-based lossless decompression, allowing reads through DirectStorage to remain compressed while being delivered to the GPU for decompression. This removes the load from the CPU, moving the data from storage to the GPU in its more efficient, compressed form, and improving I/O performance by a factor of 2.

GeForce RTX GPUs are capable of decompression performance beyond the limits of even Gen4 SSDs, offloading dozens of CPU cores’ worth of work to deliver maximum overall system performance for next generation games.
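Rough arithmetic on what that factor-of-2 claim implies for effective throughput (the drive figure is an assumption for illustration, not from the article):

```python
raw_nvme_gbps = 7.0      # assumed top-end PCIe Gen4 NVMe drive, ~7 GB/s raw
compression_ratio = 2.0  # Nvidia's quoted ~2x from staying compressed in flight
print(raw_nvme_gbps * compression_ratio)  # ~14 GB/s effective throughput
# For comparison, PS5's quoted figures are 5.5 GB/s raw and 8-9 GB/s
# typical compressed -- assuming the GPU decompression keeps pace.
```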
 

Lister

Banned
We could get a rough approximation of how large assets are from Quixel Megascans: I've seen more than 1GB for just one movie-quality asset, but of course it varies, the UE5 demo reuses the same assets, and culling is involved, so my guess for streaming would be around 3-5GB/s in normal scenes, and up to 8GB/s in the flying scene.
Also, this demo doesn't even touch the GE in PS5; it runs on async compute, so standard CUs do the geometry. Another thing: level of detail depends on I/O throughput, not so much on the GPU like some think, and pop-in depends on the latency between SSD -> RAM.
PS5 is, for now, the most capable system at these specs.

These things will never get used in actual games, though. Assets start out like that, and perhaps they can remain that way while running on development PCs, but they will need to be optimized for a production build. Even if the I/O and the game engine could handle a full game with those kinds of assets (I doubt it; this was a tech demo, after all), will Sony ask users to buy another TB drive just so they can install a single game onto it?
 

Thirty7ven

Banned
Unreal is the biggest game engine in the world, used by third and first parties alike, most of all by Xbox studios; it's the engine that powers the “graphical showcase” Gears of War. The Nanite tech demo is the most impressive real-time interactive showcase of next-gen so far, and Unreal 5 also powers Hellblade 2, which is the most impressive target render thus far.

But when Epic says that they worked with Sony really early on, and that the PS5 is thus far the ultimate expression of the major tech advancements Epic is banking on with Unreal 5 and the future of game development, how do fanboys react? Predictably.

I can’t wait for the results and the tears.
 

Herr Edgy

Member
I was hoping the article would expand on how Lumen is coming along optimisation-wise, and on any info about how Nanite renders things like grass, trees, etc. But it seemed mainly a PR piece, so...
Nanite wasn't working yet with skinned meshes (e.g. characters) or translucent materials (e.g. realistic grass and leaves). I'm also curious, especially seeing as UE5 should become available to devs as a preview in a few months.
 

Darius87

Member
These things will never get used in actual games, though. Assets start out like that, and perhaps they can remain that way while running on development PCs, but they will need to be optimized for a production build. Even if the I/O and the game engine could handle a full game with those kinds of assets (I doubt it; this was a tech demo, after all), will Sony ask users to buy another TB drive just so they can install a single game onto it?
I'm talking about assets once they end up in RAM after decompression; game size on disk will be lower because it will be compressed. They said there's no LOD needed, so that saves even more space.
 

mortal

Gold Member
Hey Epic, I'm still waiting for those UE4 demos you showed off years ago to become reality.
The UE4 engine has shown its capabilities in more ways than just games.

People seem to forget that animators and 3D artists use UE for things besides game development.
 
How big is lighting data, usually?

I'm thinking big-budget AAA games might start to come in at about 200GB. Factoring in decompression at a roughly 3x ratio, are we looking at 600GB of uncompressed data in next-gen games?
Depends entirely on the approach a developer chooses to take with game lighting.

Given even what we’ve seen so far with early next-gen games like Demon’s Souls, I don’t think AAA games compressed on disk are gonna come close to 200GB. It’s gonna be much less. Folks are gonna be very pleasantly surprised.
 