
Real-Time Neural Texture Upsampling in God of War Ragnarok on PS5

SlimySnake

Flashless at the Golden Globes
Machine learning, eh?

Another FUD from the Next-Gen OT scratched off the list.
IIRC DF asked and were told by SSM that they used machine learning in their upscaling technique for the Performance modes, kinda like DLSS, and DF was very impressed with the results. I noticed a lot of shimmering in the 60fps modes that I don't see in DLSS games, but it's interesting to see devs already experimenting with machine learning support on the PS5, which it was supposedly lacking.
 

DeepEnigma

Gold Member
IIRC DF asked and were told by SSM that they used machine learning in their upscaling technique for the Performance modes, kinda like DLSS, and DF was very impressed with the results. I noticed a lot of shimmering in the 60fps modes that I don't see in DLSS games, but it's interesting to see devs already experimenting with machine learning support on the PS5, which it was supposedly lacking.
It was never lacking; Sony just doesn't market with buzzwords that 99% of the gaming populace doesn't understand.

Love it or hate it, we always have to wait for their developer deep dives and GDC talks for anything beyond the main machine changes and selling points (the I/O and SSD, etc.).

They never get too deep beyond that.
 

ToTTenTranz

Banned
Judging by the video in your link, Nvidia are upscaling from lower-res assets (so smaller file sizes) and doing it in 1.x ms vs 9.x ms, with maybe better end results texture-detail-wise.
It's the same goal using a similar method. Nvidia is simply doing its side-by-side comparisons with low-quality texture output at iso file size because it looks better as marketing material.

Santa Monica's presentation is using same-quality output at a lower file size, which is the usual method for benchmarking compression solutions.

As for the times you mentioned, they're for different things. In Nvidia's example it's adding 1 ms to the total frametime. In Santa Monica's solution it takes 9 ms on one ALU to upscale a texture from 2K to 4K. They're not saying how much this contributes to frametime because it obviously runs in parallel with a bunch of other things.
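
For anyone wondering what this kind of pass looks like in the abstract, here's a minimal sketch in Python/numpy: cheaply upscale the lower mip, then run a small convolution over it to restore detail. The tile size, kernel, and weights below are placeholders I made up for illustration; this isn't Santa Monica's actual network, just the general "cheap upsample + small learned filter" shape these techniques tend to have.

# Toy sketch of neural texture upsampling: a 2x bilinear upscale followed by a
# small 3x3 convolution that would normally carry trained weights.
# Tile size, kernel, and weights are placeholders, not the shipped technique.
import numpy as np

def bilinear_upscale_2x(tex):
    """Upscale an (H, W, C) texture to (2H, 2W, C) with bilinear filtering."""
    h, w, _ = tex.shape
    ys = (np.arange(2 * h) + 0.5) / 2 - 0.5
    xs = (np.arange(2 * w) + 0.5) / 2 - 0.5
    y0f = np.floor(ys).astype(int)
    x0f = np.floor(xs).astype(int)
    fy = (ys - y0f)[:, None, None]
    fx = (xs - x0f)[None, :, None]
    y0, y1 = np.clip(y0f, 0, h - 1), np.clip(y0f + 1, 0, h - 1)
    x0, x1 = np.clip(x0f, 0, w - 1), np.clip(x0f + 1, 0, w - 1)
    top = tex[y0][:, x0] * (1 - fx) + tex[y0][:, x1] * fx
    bot = tex[y1][:, x0] * (1 - fx) + tex[y1][:, x1] * fx
    return top * (1 - fy) + bot * fy

def conv3x3(tex, kernel):
    """One 3x3 filter per channel (edge padded); stands in for the learned detail pass."""
    padded = np.pad(tex, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(tex)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + tex.shape[0], dx:dx + tex.shape[1]]
    return out

lowres = np.random.rand(256, 256, 3).astype(np.float32)   # stand-in for a 2K mip tile
kernel = np.full((3, 3), 1.0 / 9.0, dtype=np.float32)     # placeholder for trained weights
highres = np.clip(conv3x3(bilinear_upscale_2x(lowres), kernel), 0.0, 1.0)
print(highres.shape)   # (512, 512, 3) -- the "4K" version of the tile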
 
IIRC DF asked and were told by SSM that they used machine learning in their upscaling technique for the Performance modes, kinda like DLSS, and DF was very impressed with the results. I noticed a lot of shimmering in the 60fps modes that I don't see in DLSS games, but it's interesting to see devs already experimenting with machine learning support on the PS5, which it was supposedly lacking.
It was NXGamer who asked one of the devs in an interview and then talked about it in his video. AFAIK DF never mentioned it in any of their content about the game.
 

hlm666

Member
As for the times you mentioned, they're for different things. In Nvidia's example it's adding 1 ms to the total frametime. In Santa Monica's solution it takes 9 ms on one ALU to upscale a texture from 2K to 4K. They're not saying how much this contributes to frametime because it obviously runs in parallel with a bunch of other things.
If the tensor cores can indeed run concurrently, they should be able to be deployed on this kind of technique without taking a GPU hit. I'm guessing the metrics are different because, as you mentioned, Nvidia is processing all the textures for the frame, whereas this method seems to use the unprocessed textures until they've been filtered in the background?
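
To illustrate what I mean by "use the unprocessed textures until they're filtered": here's a rough Python sketch of the hand-off pattern (made-up names, and a nearest-neighbour stand-in for the upsampler, not anything from the actual presentation) — render with the base-resolution texture now, and swap in the upscaled one whenever the background job finishes.

# Sketch of "use the unprocessed texture until the upscaled one is ready".
# Hypothetical names; this illustrates the hand-off pattern, not real engine code.
from concurrent.futures import ThreadPoolExecutor, Future
import numpy as np

def upscale_2x(tex: np.ndarray) -> np.ndarray:
    """Stand-in for the ML upsampler: nearest-neighbour 2x repeat."""
    return tex.repeat(2, axis=0).repeat(2, axis=1)

class StreamedTexture:
    """Holds the low-res mip and swaps in the upscaled version once the job finishes."""
    def __init__(self, lowres: np.ndarray, pool: ThreadPoolExecutor):
        self.lowres = lowres
        self.highres = None
        self._job: Future = pool.submit(upscale_2x, lowres)

    def sample_source(self) -> np.ndarray:
        # Non-blocking: the renderer takes whatever is available this frame.
        if self.highres is None and self._job.done():
            self.highres = self._job.result()
        return self.highres if self.highres is not None else self.lowres

pool = ThreadPoolExecutor(max_workers=2)
tex = StreamedTexture(np.random.rand(256, 256, 3).astype(np.float32), pool)
for frame in range(3):
    # Prints the low-res shape until the background job completes, then the 2x one.
    print(frame, tex.sample_source().shape)
pool.shutdown()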
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
Isn't the point to use better-quality textures without them costing disk space?

If we aren't seeing better textures, then what's the point? Just use the same textures; they would've taken the same amount of space.

Not really.
 

SmokedMeat

Gamer™
So Nvidia's revolutionary Neural Textures, presented for the first time two weeks ago in tech demos that were going to change the world... had actually been preceded by a software implementation from Santa Monica, shipped in a title half a year earlier, running on RDNA2 hardware without the need for dedicated tensor units.

You can see a visual difference in Nvidia’s Neural Textures.

You’re lying to yourself if you think you can see an obvious difference in the Sony shots.
 

ToTTenTranz

Banned
You’re lying to yourself if you think you can see an obvious difference in the Sony shots.

Sure, I'm the one lying to myself.

On page 25:

(attached image: ZC5jjx2.png)
 

ChiefDada

Gold Member
Always great to see new tech being utilized for optimization.

It was NXGamer who asked one of the devs in an interview and then talked about it in his video. AFAIK DF never mentioned it in any of their content about the game.

Lol, they actually did talk about it, albeit indirectly, during a DF Direct episode. John and Alex were discussing their analysis and found it interesting that the performance mode shots oftentimes looked better than the native 4K mode. "Huh, that's interesting," with no care to follow up with the developer to learn what's going on. You know, what one would typically expect from a tech channel at their level. Typical DF. Smh.
 

Neilg

Member
For everyone shitting on this, it's a pretty early first step, but it's got real results. The long term goal of using less disk space and having more available memory is to fill it back up with cool shit.

I'm really excited for Nvidia NeuralVDB.


The only thing holding games back from relying more on pre-cached high-detail volumetric assets/simulations is the amount of disk space they take up.
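
To put rough numbers on that: a dense volume grid grows cubically and is mostly empty space, which is exactly what sparse formats like VDB (and now NeuralVDB) exploit. Here's a toy Python illustration with made-up resolutions and occupancy, not actual NeuralVDB figures:

# Toy illustration of why dense volumetric caches are so heavy and why sparse
# storage (the idea behind VDB/NeuralVDB) helps. All numbers are made up.
import numpy as np

res = 256                                    # voxels per side for one cached frame
dense = np.zeros((res, res, res), dtype=np.float16)
# Pretend the smoke plume only occupies a small region of the grid.
dense[100:160, 100:160, 50:210] = np.random.rand(60, 60, 160).astype(np.float16)

dense_mb = dense.nbytes / 2**20
occupied = np.flatnonzero(dense)             # indices of non-empty voxels
# Naive sparse layout: store values plus their flat indices.
# Real formats use far cheaper block/tree indexing (or a neural representation).
sparse_mb = (occupied.nbytes + dense.ravel()[occupied].nbytes) / 2**20

print(f"dense grid:  {dense_mb:.1f} MB per frame")
print(f"sparse grid: {sparse_mb:.1f} MB per frame (values + indices only)")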
 