
DF: The Touryst PS5 - The First 8K 60fps Console Game

Lysandros

Member
On Series X the average available bandwidth decreases when the slower memory pool is in use. Some believe the average will still be higher than PS5's, but not by much anymore (I have seen estimates of a reduction of up to ~40GB/s, making the 560GB/s effectively 520GB/s). This is because while the slower pool is being accessed, the fast pool cannot be accessed simultaneously. It's physically impossible.
To be fair, contention is a problem for every system using UMA, including PS5 just like PS4 before it; we must take that into account. Even with a 520 GB/s figure, bandwidth per TF would be only a few percent higher for PS5. I really don't think the developer would mention the memory setup if the difference in bandwidth weren't meaningfully in favor of PS5, and considering the large resolution difference, a meaningful real-world bandwidth advantage could hardly be less than ~10-15%. So what's causing it then? Maybe the XSX CPU consumes more bandwidth; that was the case last gen if I remember correctly, with PS4 capping CPU bandwidth at 20 GB/s whereas it was around 30 GB/s for Xbox One. This could worsen contention on the XSX side. PS5 may also use lower-latency GDDR6 chips, improving real-world bandwidth. I think that with its deficit in cache bandwidth, XSX would need more RAM bandwidth to begin with, especially with 16 more CUs.
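To put rough numbers on that 560 vs ~520 GB/s argument, here's a quick back-of-envelope sketch. The pool and PS5 figures are the published ones; the slow-pool traffic share is a made-up knob, since the real mix depends entirely on the game:

```python
# Back-of-envelope: effective XSX bandwidth when fast- and slow-pool
# accesses cannot overlap, vs PS5's unified pool. Traffic split is
# hypothetical, not measured from any game.
FAST_BW = 560.0   # GB/s, 10 GB "GPU optimal" pool
SLOW_BW = 336.0   # GB/s, 6 GB slow pool
PS5_BW = 448.0    # GB/s, unified 16 GB pool

def xsx_effective_bw(slow_fraction: float) -> float:
    """Serialized mix: the time to move 1 GB is split between pools."""
    time_per_gb = slow_fraction / SLOW_BW + (1.0 - slow_fraction) / FAST_BW
    return 1.0 / time_per_gb

for frac in (0.0, 0.1, 0.2, 0.3):
    bw = xsx_effective_bw(frac)
    side = "above" if bw > PS5_BW else "below"
    print(f"{frac:.0%} slow-pool traffic -> ~{bw:.0f} GB/s ({side} PS5's 448)")
```

In this toy model, about 10% of traffic landing in the slow pool already brings the effective figure down to roughly 525 GB/s, in the same ballpark as the ~520 GB/s estimate quoted above; it takes roughly 35-40% slow-pool traffic before the figure falls below PS5's 448 GB/s.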
 

azertydu91

Hard to Kill
I didn't say he was using all of them, just that he himself said that he rewrote the engine for the PS5 specifically to take advantage of the hardware.
So do you believe him only when it concurs with your narrative, but when he says the PS5 has the advantage in RAM setup and higher clocks, his words don't matter?
And then maybe you need to understand that of course he rewrote the game to take advantage of the PS5; it's called porting a game to another console. Spoiler: it would have happened the other way around too if the game had come out on PS5 first, but that doesn't mean the result would've been different.
 

azertydu91

Hard to Kill
To be fair contention is a problem for every system using UMA including PS5 just like PS4 before, we must take that into account. Even with a 520 GB/s figure, bandwidth per TF would be only a few percent higher for PS5. I really don't think that the developer would mention the memory setup if the difference in bandwidth wasn't meaningful in favor of PS5. I really don't see a meaningful real world bandwidth advantage to be less than ~10-15% considering the large resolution difference. So what's causing it then? Maybe XSX CPU consumes more bandwidth, that was the case for last gen if i remember correctly, PS4 had a capped bandwidth of 20 GB/s for the CPU whereas it was around 30 GB/s for XboxOne. This can worsen contention on XSX side. PS5 may also use lower latency GDDR6 chips improving real world bandwidth. I think with its deficit in cache bandwidth XSX would need more RAM bandwidth to begin with especially with 16 more CUs.
Maybe the CPU on the xsx needs more power to decompress the SSD, because they've reduced the power needed which is great but it still uses the CPU if I remember correctly.
 

Lognor

Banned
I had no idea so many people were interested in this game. It was dull beyond belief. Or am I missing something? Is it more likely that the people hyping the PS5 version up have never played it?
 
Last edited:

arvfab

Banned
I had no idea so many people were interested in this game. It was dull beyond belief. Or am I missing something? Is it more likely that the people hyping the PS5 version up have never played it?

It's a great, light-hearted game. Played it on release on the Switch. Would recommend it.
 
Last edited:

Heisenberg007

Gold Journalism
Oh, so it's using Velocity Architecture? SFS? All the other DX12U features and all the Xbox-specific features? No, it's not.

I can tell none of you are developers lol. Saying that an old game has been ported using the new SDK does not in any way mean that it's now a game completely built for that new console.

What you're essentially saying is that God of War and GT7 are actually full-on native next-gen PS5 games taking full advantage of the PS5's power, since they're made on the PS5 SDK.
So is the PS5 version using the HW decompressors, the Geometry Engine, and all the PS5-specific features? You didn't develop this game either, so instead of baseless speculation, why not just believe what the dev is telling you?

Which brings me back to my question that you ignored: do you believe the developer when he says that the PS5's memory setup and higher clock speeds helped him achieve 8K/60?
 
I had no idea so many people were interested in this game. It was dull beyond belief. Or am I missing something? Is it more likely that the people hyping the PS5 version up have never played it?

I think it's because it's interesting how the developer talks about how they took advantage of the PS5's hardware, and then also compares it to the XSX's, which is interesting to see.

It seems like a good game though.

So is the PS5 version using the HW decompressors, the Geometry Engine, and all the PS5-specific features? You didn't develop this game either, so instead of baseless speculation, why not just believe what the dev is telling you?

Which brings me back to my question that you ignored: do you believe the developer when he says that the PS5's memory setup and higher clock speeds helped him achieve 8K/60?

Is he really calling the developers liars, though, without any proof?
 
Last edited:

MrFunSocks

Banned
So is the PS5 version using the HW decompressors, the Geometry Engine, and all the PS5-specific features? You didn't develop this game either, so instead of baseless speculation, why not just believe what the dev is telling you?

Which brings me back to my question that you ignored: do you believe the developer when he says that the PS5's memory setup and higher clock speeds helped him achieve 8K/60?
Read my previous post, #551.

For your question: sure, why wouldn't I? That doesn't mean that this little indie game, with some of the most basic graphics you'll ever see (though I do love the art style), couldn't do 8K on a 12 TF machine if he tried.

The dev didn't say it's not possible on Xbox, like you seem to imply.
 
Last edited:

Lognor

Banned
I think it's because it's interesting how the developer talks about how they took advantage of the PS5's hardware, and then also compares it to the XSX's, which is interesting to see.

It seems like a good game though.



Is he really calling the developers liars, though, without any proof?
Have you played it? It is very boring. I made it to the third island, realized there wasn't any more to the game, and gave up. Glad I played it for free on Game Pass, because it's not worth the money imo.
 

azertydu91

Hard to Kill
Read my previous post, #551.

For your question: sure, why wouldn't I? That doesn't mean that this little indie game, with some of the most basic graphics you'll ever see (though I do love the art style), couldn't do 8K on a 12 TF machine if he tried.

The dev didn't say it's not possible on Xbox, like you seem to imply.
Guys... Guys... I found someone that still uses "lazy devs" in 2021... Rumor has it that's as rare as seeing a real unicorn nowadays.
 

Lognor

Banned
But you're also not the only one that's played it. I've seen others who have played it say that they enjoy it. Also, the Metacritic/OpenCritic scores don't seem bad for the game.
Just giving my opinion. I think there is more at play here. A game no one was talking about a week ago, and now we're over 500 posts into this thread. For someone that hasn't played it, you sure are defending it quite a bit. When do you plan on picking it up?
 

MrFunSocks

Banned
Just giving my opinion. I think there is more at play here. A game no one was talking about a week ago, and now we're over 500 posts into this thread. For someone that hasn't played it, you sure are defending it quite a bit. When do you plan on picking it up?
Yeah, it's an old game, but it's getting a PS5 release, so it's a lot of people's first exposure to it. This interview/comment has just stirred the console-war hornets' nest, with people reading far too much into it.

Like you, I played it because of Game Pass. I think I got to about the 3rd or 4th island; I think I'd just finished the one with the beach disco. I did love the art style and graphics, but it's a very, very simple little game that gets very repetitive very quickly. I think I played two sessions of it and then never went back, and I never intend to.
 
Last edited:

ethomaz

Banned
To be fair, contention is a problem for every system using UMA, including PS5 just like PS4 before it; we must take that into account. Even with a 520 GB/s figure, bandwidth per TF would be only a few percent higher for PS5. I really don't think the developer would mention the memory setup if the difference in bandwidth weren't meaningfully in favor of PS5, and considering the large resolution difference, a meaningful real-world bandwidth advantage could hardly be less than ~10-15%. So what's causing it then? Maybe the XSX CPU consumes more bandwidth; that was the case last gen if I remember correctly, with PS4 capping CPU bandwidth at 20 GB/s whereas it was around 30 GB/s for Xbox One. This could worsen contention on the XSX side. PS5 may also use lower-latency GDDR6 chips, improving real-world bandwidth. I think that with its deficit in cache bandwidth, XSX would need more RAM bandwidth to begin with, especially with 16 more CUs.
His idea of an average is flawed, but I didn't bother to reply.

There are two parts of the memory to be accessed: 10 GB at high speed and 6 GB at low speed.

If your game reaches the point of putting GPU data outside those 10 GB, the render has to wait however long the slow part takes to deliver the data… so you don't get the averaged figure; you get the GPU waiting on the slowest part whenever it's needed.

So an average of 520 GB/s is not accurate… if the data is within the 10 GB it will be 560 GB/s, but if there is data the GPU needs in the 336 GB/s part, the render has to wait for that data.

How much that wait affects render time depends on the engine and on how critical the data is at render time.

IMO at lower resolutions the spill-over would probably be unimportant data (like audio), but at 8K the data will probably all be critical for render time.

If I have to guess why he made that comment about the memory setup, it's because at 8K his engine takes more than 10 GB of critical data, and once it spills into the slow part of the memory the data won't arrive in time for the render to hold 30 fps… the render has to wait a bit longer, misses its fraction-of-a-second window to deliver the frame, and the overall framerate drops below 30 for that second… in simple terms, performance issues that aren't fixable unless he rewrites the whole engine to use less critical data.

So dropping to 6K he found that his engine's critical data doesn't exceed the 10 GB of fast memory.

But remember, that is a very engine-specific situation… even so, it's a safe bet that the jump to 8K requires way more RAM than 4K.
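To illustrate that argument in toy form: a frame stalls on its slowest critical transfer, not on an average. Everything below is invented for illustration (the per-frame data volume especially), not measured from The Touryst:

```python
# Toy frame-budget model: a frame waits on all of its critical transfers,
# assuming the two pools are not read simultaneously (as argued earlier
# in the thread). All sizes and splits are invented for illustration.
FRAME_BUDGET_MS = 16.7               # 60 fps target
FAST_BW_GBPS, SLOW_BW_GBPS = 560.0, 336.0

def transfer_ms(gb: float, bw_gbps: float) -> float:
    return gb / bw_gbps * 1000.0

def frame_time_ms(critical_gb: float, fast_pool_share: float) -> float:
    """Critical data split between pools; the GPU waits for both."""
    fast_gb = critical_gb * fast_pool_share
    slow_gb = critical_gb - fast_gb
    return transfer_ms(fast_gb, FAST_BW_GBPS) + transfer_ms(slow_gb, SLOW_BW_GBPS)

for share in (1.0, 0.9, 0.7):
    t = frame_time_ms(8.0, share)    # pretend 8 GB of critical traffic/frame
    verdict = "OK" if t <= FRAME_BUDGET_MS else "misses 60 fps"
    print(f"{share:.0%} in fast pool: {t:.1f} ms ({verdict})")
```

With these made-up numbers, the same 8 GB of traffic fits the 16.7 ms budget when it all sits in the fast pool, but blows it once 30% spills into the slow pool; that is the "waiting on the slowest part" effect in miniature.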
 
Last edited:

MrFunSocks

Banned
Just for reference: The Touryst on Series X has an install size of about 500 MB. Each level is a teeny tiny little thing. People arguing that RAM speeds and amounts are a problem for Series X, meaning it can't hit 8K, could literally not have chosen a worse example of a game.
 

Zathalus

Member
Oh, and the thing about Xbox using a high-level API and PS5 a low-level one... I'm afraid I have to claim that one too. If that's actually true, then the difference between low- and high-level APIs would be bigger than ever, especially between the PlayStation and Xbox APIs.
If it however IS true that the choice between the two APIs makes a difference of 14 million pixels, then future comparisons (after the cross-gen phase) between Sony and MS exclusives are already decided...
So, Team Xbox, choose your poison. Which should it be:
a horrendous lack of API efficiency
or
a horrendous lack of architectural efficiency?
My position is that a single game is a piss-poor representation of anything.

Some games that run better on Series X:

Control
RE: Village
Tales of Arise
Subnautica: Below Zero
Metro Exodus
Outriders
Watch Dogs: Legion
Hitman 3
Marvel's Avengers
Doom Eternal

Some games that run better on PS5:

The Touryst
Assassin's Creed Valhalla
Little Nightmares II
Call of Duty: Black Ops Cold War
Dirt 5
Scarlet Nexus

Then there are a ton of games that are basically identical on both (most having less than a 0.05 FPS average difference between them).

Thus, I think it should be pretty clear that the consoles are very close to each other, each with its respective weaknesses and strengths. Some game engines will work better on one, while others will work better on the other. I'd wager this tit-for-tat will go on for most of the generation, with the XSX taking a slight lead in most titles on average, especially once game engines are optimised to take advantage of features the XSX has that the PS5 does not. Even then, the difference will be way smaller than the XONE/PS4 or XOX/Pro difference.
 
His idea of an average is flawed, but I didn't bother to reply.

There are two parts of the memory to be accessed: 10 GB at high speed and 6 GB at low speed.

If your game reaches the point of putting GPU data outside those 10 GB, the render has to wait however long the slow part takes to deliver the data… so you don't get the averaged figure; you get the GPU waiting on the slowest part whenever it's needed.

So an average of 520 GB/s is not accurate… if the data is within the 10 GB it will be 560 GB/s, but if there is data the GPU needs in the 336 GB/s part, the render has to wait for that data.

How much that wait affects render time depends on the engine and on how critical the data is at render time.

IMO at lower resolutions the spill-over would probably be unimportant data (like audio), but at 8K the data will probably all be critical for render time.

If I have to guess why he made that comment about the memory setup, it's because at 8K his engine takes more than 10 GB of critical data, and once it spills into the slow part of the memory the data won't arrive in time for the render to hold 30 fps… the render has to wait a bit longer, misses its fraction-of-a-second window to deliver the frame, and the overall framerate drops below 30 for that second… in simple terms, performance issues that aren't fixable unless he rewrites the whole engine to use less critical data.

So dropping to 6K he found that his engine's critical data doesn't exceed the 10 GB of fast memory.

But remember, that is a very engine-specific situation… even so, it's a safe bet that the jump to 8K requires way more RAM than 4K.
The average of ~520 GB/s is accurate. I don't think you fully understand the memory setup used on the XSX. As a theoretical example, averaging bandwidth over one whole frame with heavy CPU use:

If both the CPU (under heavy use) and the GPU work across all the available memory (10 GB for the GPU + 3 GB for the CPU, the rest is OS) during a 16 ms frame, and you average the maximum bandwidth available to you, you get ~520 GB/s instead of 560 GB/s on the XSX, while you still get 448 GB/s on the PS5.

That's of course ignoring the usual memory contention problems, which further impact both machines in the same way.
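For what it's worth, one way to land on a ~520 GB/s frame average is a simple time-weighted mix, where the bus spends a slice of each frame serving CPU traffic at the slow pool's speed. The slice below is back-solved to hit 520; it's not a published figure:

```python
# One way to arrive at ~520 GB/s: a time-weighted average over a frame
# in which the bus spends a slice of its time serving CPU traffic from
# the slow pool. The slice is back-solved here, not a published number.
FAST_BW, SLOW_BW, TARGET = 560.0, 336.0, 520.0

cpu_slice = (FAST_BW - TARGET) / (FAST_BW - SLOW_BW)  # fraction of frame time
avg = cpu_slice * SLOW_BW + (1.0 - cpu_slice) * FAST_BW
print(f"CPU-bound slice of frame: {cpu_slice:.1%}")   # ~17.9%
print(f"Time-weighted average:    {avg:.0f} GB/s")    # 520 by construction
```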
 
Last edited:

Lysandros

Member
His idea of an average is flawed, but I didn't bother to reply.

There are two parts of the memory to be accessed: 10 GB at high speed and 6 GB at low speed.

If your game reaches the point of putting GPU data outside those 10 GB, the render has to wait however long the slow part takes to deliver the data… so you don't get the averaged figure; you get the GPU waiting on the slowest part whenever it's needed.

So an average of 520 GB/s is not accurate… if the data is within the 10 GB it will be 560 GB/s, but if there is data the GPU needs in the 336 GB/s part, the render has to wait for that data.

How much that wait affects render time depends on the engine and on how critical the data is at render time.

IMO at lower resolutions the spill-over would probably be unimportant data (like audio), but at 8K the data will probably all be critical for render time.

If I have to guess why he made that comment about the memory setup, it's because at 8K his engine takes more than 10 GB of critical data, and once it spills into the slow part of the memory the data won't arrive in time for the render to hold 30 fps… the render has to wait a bit longer, misses its fraction-of-a-second window to deliver the frame, and the overall framerate drops below 30 for that second… in simple terms, performance issues that aren't fixable unless he rewrites the whole engine to use less critical data.

So dropping to 6K he found that his engine's critical data doesn't exceed the 10 GB of fast memory.

But remember, that is a very engine-specific situation… even so, it's a safe bet that the jump to 8K requires way more RAM than 4K.
Thanks for the reply. So your main point is that this game's VRAM requirement at 8K exceeds 10 GB, and thus data requests from the slower 336 GB/s pool result in real-world bandwidth significantly lower than PS5's (even if momentarily), right?
 

ethomaz

Banned
The average of ~520 GB/s is accurate. I don't think you fully understand the memory setup used on the XSX. As a theoretical example, averaging bandwidth over one whole frame with heavy CPU use:

If both the CPU (under heavy use) and the GPU work across all the available memory (10 GB for the GPU + 3 GB for the CPU, the rest is OS) during a 16 ms frame, and you average the maximum bandwidth available to you, you get ~520 GB/s instead of 560 GB/s on the XSX, while you still get 448 GB/s on the PS5.

That's of course ignoring the usual memory contention problems, which further impact both machines in the same way.
There is no average at all.
If the data needed is in the 336 GB/s pool, the GPU has to wait for that data to arrive, no matter whether the other data already arrived and was processed by the GPU.
 
Last edited:

ethomaz

Banned
Thanks for the reply. So your main point is that this game's VRAM requirement at 8K exceeds 10 GB, and thus data requests from the slower 336 GB/s pool result in real-world bandwidth significantly lower than PS5's (even if momentarily), right?
My guess from the dev's words is that one of the issues for 8K on Series X was that they were reaching outside the 10 GB split.

If that were not the case, he would never have mentioned the memory setup making a difference between PS5 and Series X… the only difference in PS5's favor is the non-split setup.
 
Last edited:

ethomaz

Banned
Do you understand the concept of average bandwidth?
Do you understand how GPU rendering works?

If you have to wait on the slowest part, the whole render is affected and limited by the slowest part… it doesn't work as an average speed.

The difference in speed just leaves the GPU waiting (doing nothing) for the slowest data before it can deliver the frame.

If you actually had an average of 520 GB/s, it would be way better than having some data stored in the 336 GB/s part of the memory.

That is why MS calls the 10 GB "GPU optimal" memory… they don't want devs to put render-critical data outside those 10 GB, because if you do, you are basically waiting on data at 336 GB/s, which is basically the same as having the whole memory at 336 GB/s.

There is no use in most of the data reaching the GPU faster if you have to wait for the slowest data to deliver the frame.
 
Last edited:

Heisenberg007

Gold Journalism
Read my previous post, #551.

For your question: sure, why wouldn't I? That doesn't mean that this little indie game, with some of the most basic graphics you'll ever see (though I do love the art style), couldn't do 8K on a 12 TF machine if he tried.

The dev didn't say it's not possible on Xbox, like you seem to imply.
Nor did he say that it is possible, like you seem to imply. The two things he said helped make it possible are only available on PS5.
 

Hoddi

Member
My guess from the dev's words is that one of the issues for 8K on Series X was that they were reaching outside the 10 GB split.

If that were not the case, he would never have mentioned the memory setup making a difference between PS5 and Series X… the only difference in PS5's favor is the non-split setup.
I don't think memory is the issue, as the game only uses 3-4 GB of VRAM at 8K on my PC. The small dip I've highlighted (1.3 GB) in this screengrab is at native 4K.




Edit:

I just tried it on my old GTX 1060 system. 4K60 runs at just 50% GPU load, and 8K locks to 30 fps.
 
Last edited:

Lysandros

Member
Do you understand how GPU rendering works?

If you have to wait on the slowest part, the whole render is affected and limited by the slowest part… it doesn't work as an average speed.

The difference in speed just leaves the GPU waiting (doing nothing) for the slowest data before it can deliver the frame.

If you actually had an average of 520 GB/s, it would be way better than having some data stored in the 336 GB/s part of the memory.
I think he's just saying that this "wait" will lower the average (as in the real-world application performance of the pool's bandwidth) by nature; that's not necessarily contrary to what you are saying.
 
Last edited:

ethomaz

Banned
I don't think memory is the issue, as the game only uses 3-4 GB of VRAM at 8K on my PC. The small dip I've highlighted (1.3 GB) in this screengrab is at native 4K.
The dev said it is an issue.
I'm guessing why it is an issue… not whether it is an issue.

But your results are interesting… can you use an accurate GPU memory usage tool with an overlay? RivaTuner + Afterburner works.
 
Last edited:

Lysandros

Member
Thus, I think it should be pretty clear that the consoles are very close to each other, each with its respective weaknesses and strengths. Some game engines will work better on one, while others will work better on the other. I'd wager this tit-for-tat will go on for most of the generation, with the XSX taking a slight lead in most titles on average, especially once game engines are optimised to take advantage of features the XSX has that the PS5 does not. Even then, the difference will be way smaller than the XONE/PS4 or XOX/Pro difference.
And the same cannot happen for PS5 because the Geometry Engine, cache scrubbers, I/O complex and Tempest Engine are 100% leveraged as of now?..
 
Last edited:

Lysandros

Member
I don't think memory is the issue, as the game only uses 3-4 GB of VRAM at 8K on my PC. The small dip I've highlighted (1.3 GB) in this screengrab is at native 4K.
Thanks, I was thinking the same thing: just how can such a texture-light game use more than 10 GB of VRAM, even at 8K? To be clear, I am not objecting to the developer's comment about the memory setup of the XSX at all; I just think that an engine doesn't need to exceed the 10 GB limit to encounter bandwidth problems on the machine's 560/336 GB/s pools. Maybe for some operations simultaneous access is enough to drop it below PS5 levels. Maybe contention is simply more of a problem on the XSX, with higher impact.
 
Last edited:

Hoddi

Member
Thanks, I was thinking the same thing: just how can such a texture-light game use more than 10 GB of VRAM, even at 8K? To be clear, I am not objecting to the developer's comment about the memory setup of the XSX at all; I just think that an engine doesn't need to exceed the 10 GB limit to encounter bandwidth problems on the machine's 560/336 GB/s pools. Maybe for some operations simultaneous access is enough to drop it below PS5 levels. Maybe contention is simply more of a problem on the XSX, with higher impact.
Ya, I've edited my post to include my old GTX 1060. There's something very strange about this game not hitting 8k60 on the XSX as it should be at least 3x faster than that.

My 2080 Ti runs it at 100fps+.
 
Last edited:

onQ123

Member
Ya, I've edited my post to include my old GTX 1060. There's something very strange about this game not hitting 8k60 on the XSX as it should be at least 3x faster than that.

My 2080 Ti runs it at 100fps+.
What's the pixel fillrate of your system?
 

Lysandros

Member
Ya, I've edited my post to include my old GTX 1060. There's something very strange about this game not hitting 8k60 on the XSX as it should be at least 3x faster than that.

My 2080 Ti runs it at 100fps+.
Based on TF count, yes, but this game is very ALU-light. With 48 ROPs at 1700 MHz, the GTX 1060 has a pixel fill rate of ~82 Gpixels/s, which is a more relevant metric here. Now compare that to the XSX's 116 Gpixels/s: that's only ~40% more... By the way, GPU load readings can easily be misleading here, since the large ALU part of the GPU is barely used, like I said; hence the readings, I suppose. That doesn't mean the ROPs, rasterizers, primitive units etc. aren't being pushed hard.
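The arithmetic behind those fill-rate figures, plus what 8K60 demands in raw output pixels (clocks are the commonly cited peak values; real throughput also depends on blending, overdraw and pass count):

```python
# Rough pixel fill-rate comparison (ROPs x clock), plus the raw pixel
# throughput that 8K60 demands for final frame-buffer writes alone.
def fillrate_gpix(rops: int, clock_ghz: float) -> float:
    return rops * clock_ghz          # Gpixels/s

gtx1060 = fillrate_gpix(48, 1.70)    # ~81.6 Gpix/s
xsx = fillrate_gpix(64, 1.825)       # ~116.8 Gpix/s
print(f"GTX 1060: {gtx1060:.1f} Gpix/s, XSX: {xsx:.1f} Gpix/s "
      f"(+{(xsx / gtx1060 - 1):.0%})")

# 8K60 output is ~2 Gpix/s before overdraw; multiple passes and
# blending multiply the real cost many times over.
print(f"8K60 output: {7680 * 4320 * 60 / 1e9:.2f} Gpix/s before overdraw")
```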
 
If devs are willing to rewrite games for PS5 but not Xbox Series X, that only makes the outlook for Xbox Series X even worse.

Interesting choice to keep glossing over the fact that the Xbox Series X version came at launch, when it was well documented by DF that lots of aspects of Series X development were quite a ways behind where PS5 development was. And about three months before the Series X version launched, the dev for this game themselves acknowledged to DF that the PC and Xbox conversions of The Touryst were about "getting to grips with DX12". In other words, they could just be scratching the surface of utilizing DX12.



On PC, you can select your own preferred resolution of course but curiously, there is an anti-aliasing option. By design, The Touryst is relatively light on GPU resources, so Shin'en's chosen AA technique is brute-force super-sampling, which is to say that if you run at native 4K with AA enabled, the game is actually rendering internally at 8K instead and downscaling to your display resolution. In most other respects, The Touryst is essentially the same as the Switch version - just stripped of a dynamic resolution scaling system that is no longer required.

The one exception is shadow map resolution and associated resolution, which is significantly improved. Xbox One X and the PC version support 4096x4096 shadow maps with three cascades, which drops to 3584x3584 on the standard Xbox One. There's refinement then, but fundamentally, The Touryst doesn't really need improved shaders, extended draw distances or higher precision post-process pipelines. These ports from Switch deliver superb results simply from a push to higher resolution and improved shadow quality - though it would be interesting to see actual native 8K or indeed ray tracing added to a prospective Xbox Series X conversion. Shin'en Entertainment tells us that these conversions were all about getting to grips with DX12 - so why not DXR too?

So what we essentially have here is confirmation that the Xbox One versions of The Touryst are Switch ports, and about 3 months later they took that Xbox One X work and ported it to the Series X GDK. But in the case of PS5 we have actual developer confirmation to Digital Foundry that they didn't just port the PS4 version of the game to PS5; they instead rewrote the game's engine for the PS5's low-level APIs, something it's clear the Series X did not benefit from, nearly a whole year on from when the game released for Series X.

As to why the Switch port hit 8K on PC but not Series X, the answer is pretty obvious: more RAM. PCs have more available RAM between GPU VRAM and system RAM, which is how PCs with GPUs weaker than the one inside Series X (with slower GPU clock speeds, too) can still achieve 8K resolution in this game. Consoles don't have their memory designed the way PCs do, and this is even more true for a system like the Xbox Series X with its unique asymmetric memory setup, making a proper engine rewrite all the more important for that console. But that's not what Series X received. It got an Xbox One port of a Switch port.
 

onQ123

Member
Interesting choice to keep glossing over the fact that the Xbox Series X version came at launch, when it was well documented by DF that lots of aspects of Series X development were quite a ways behind where PS5 development was. And about three months before the Series X version launched, the dev for this game themselves acknowledged to DF that the PC and Xbox conversions of The Touryst were about "getting to grips with DX12". In other words, they could just be scratching the surface of utilizing DX12.





So what we essentially have here is confirmation that the Xbox One versions of The Touryst are Switch ports, and about 3 months later they took that Xbox One X work and ported it to the Series X GDK. But in the case of PS5 we have actual developer confirmation to Digital Foundry that they didn't just port the PS4 version of the game to PS5; they instead rewrote the game's engine for the PS5's low-level APIs, something it's clear the Series X did not benefit from, nearly a whole year on from when the game released for Series X.

As to why the Switch port hit 8K on PC but not Series X, the answer is pretty obvious: more RAM. PCs have more available RAM between GPU VRAM and system RAM, which is how PCs with GPUs weaker than the one inside Series X (with slower GPU clock speeds, too) can still achieve 8K resolution in this game. Consoles don't have their memory designed the way PCs do, and this is even more true for a system like the Xbox Series X with its unique asymmetric memory setup, making a proper engine rewrite all the more important for that console. But that's not what Series X received. It got an Xbox One port of a Switch port.
How did I gloss over this? My comment was that if devs are willing to rewrite their games for PS5 but not for Xbox Series X, it's a worse outlook for Xbox Series X.
 
Ya, I've edited my post to include my old GTX 1060. There's something very strange about this game not hitting 8k60 on the XSX as it should be at least 3x faster than that.

My 2080 Ti runs it at 100fps+.

I'm glad more people are seeing this lol.



The answer is right here, from when the game first launched on Xbox One and PC. Xbox and PC got a Switch port, and 3 months after this the Series X version launched: no engine rewrite, but a port of the Xbox One game that was already a port from Switch. Without the benefit of a proper engine rewrite, of course the Series X didn't reach its maximum potential. But people would have you believe otherwise: that the Series X's really high clock speed is simply too slow (even though slower PC GPUs hit 8K in this game). The bigger culprit is memory and the engine not being rewritten for Series X.

PCs have more available RAM due to the combination of system memory and GPU VRAM. On the consoles the GPU VRAM IS the system memory, and there tends to be much less of it compared to what you get in a PC setup.


The Touryst is essentially the same as the Switch version - just stripped of a dynamic resolution scaling system that is no longer required.

The one exception is shadow map resolution and associated resolution, which is significantly improved. Xbox One X and the PC version support 4096x4096 shadow maps with three cascades, which drops to 3584x3584 on the standard Xbox One. There's refinement then, but fundamentally, The Touryst doesn't really need improved shaders, extended draw distances or higher precision post-process pipelines. These ports from Switch deliver superb results simply from a push to higher resolution and improved shadow quality - though it would be interesting to see actual native 8K or indeed ray tracing added to a prospective Xbox Series X conversion. Shin'en Entertainment tells us that these conversions were all about getting to grips with DX12 - so why not DXR too?

Notice the confirmation of the Xbox versions being ports? The Series X version is itself a port from Xbox One. PS5 didn't get a PS4 port. It got a version of the game that had its engine rewritten for PS5's low-level APIs.
 
How did I gloss over this? My comment was that if devs are willing to rewrite their games for PS5 but not for Xbox Series X, it's a worse outlook for Xbox Series X.

Keep in mind I stated that you were glossing over the possibility that the developer didn't have time to do a rewrite, considering the Series X version came at launch last November, only 3 months after the game released on Xbox One and PC. We know the Xbox development tools were behind, according to DF. Maybe if the developer had had as much time as they did with the PS5 version, the game would have received a rewrite.
 

Tripolygon

Banned
So what we essentially have here is confirmation that the Xbox One versions of Touryst are switch ports, and about 3 months later they took that Xbox One X work and ported it to the Series X GDK. But in the case of PS5 we have actual developer confirmation to Digital Foundry that they didn't just port the PS4 version of the game to PS5, they instead rewrote the game's engine for the PS5's low level APIs, something it's clear Series X did not benefit from nearly a whole year from the time the game released for Series X.

As to why the switch port hit 8K on PC but not Series X, the answer is pretty obvious, more RAM. PCs have more available RAM between GPU VRAM and system RAM so this is how PCs with weaker GPUs than what's inside Series X (slower GPU clock speeds also) can still achieve 8K resolution in this game. Consoles don't have their memory designed the way PCs do, and this is even more true for a system like Xbox Series X with its unique asymmetric memory setup, making a proper engine rewrite all the more important for that console, but that's not what Series X received. It got an Xbox One port of a switch port.
What are you on about?

The game is a Switch game that came out in 2019; subsequent versions are ports derived from the Switch version.

Switch does not use DirectX 12, which is Microsoft's graphics API for Windows and Xbox, so their engine had to be "redone" to take advantage of the DirectX API for Windows and Xbox One. They also moved to the GDK, which is Microsoft's newer development environment, and their engine was again tuned, redone and optimized to take advantage of the Series X DX12 API.

Neither PS4 nor PS5 uses DX12, so again the engine had to be redone to use GNM/X, which is the PS4 API, assuming PS5 uses an updated version of GNM/X.

A graphics API is just a translation layer that allows the software to make use of the underlying GPU hardware.

So is your argument that the GDK and the DirectX 12 API on Series X are what is preventing the Xbox Series X from running the game at 8K 60fps?
 
Last edited:

Hoddi

Member
What's the pixel fillrate of your system?
I'm not sure. I have a power cap on my 2080 Ti, so it usually hangs in the 1500-1600 MHz range. That should place it in the 130-140 Gpixels/s range.

Based on TF count, yes, but this game is very ALU-light. With 48 ROPs at 1700 MHz, the GTX 1060 has a pixel fill rate of ~82 Gpixels/s, which is a more relevant metric here. Now compare that to the XSX's 116 Gpixels/s: that's only ~40% more... By the way, GPU load readings can easily be misleading here, since the large ALU part of the GPU is barely used, like I said; hence the readings, I suppose. That doesn't mean the ROPs, rasterizers, primitive units etc. aren't being pushed hard.
The 1060 has 48 ROPs but only 32 rasterizers on the front-end. The ROPs on Pascal were tied to the number of memory channels, so the 1060 had 48 of them because it was a 192-bit card. It could still only rasterize 32 pixels per clock.

You're right about the GPU utilization, though I'll add that 8K30 is twice the pixel rate of 4K60; 50% utilization at 4K60 follows rather closely from that.
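That 8K30-vs-4K60 observation is straightforward to check:

```python
# Pixel-rate sanity check for the GTX 1060 observation above: 8K30
# pushes exactly twice the pixels per second of 4K60, so ~50% GPU load
# at 4K60 lining up with an 8K30 lock is consistent.
def pix_per_sec(w: int, h: int, fps: int) -> float:
    return w * h * fps

r4k60 = pix_per_sec(3840, 2160, 60)
r8k30 = pix_per_sec(7680, 4320, 30)
print(f"4K60: {r4k60 / 1e6:.0f} Mpix/s, 8K30: {r8k30 / 1e6:.0f} Mpix/s, "
      f"ratio {r8k30 / r4k60:.1f}x")
```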
 
Last edited:

MrLove

Banned
She was right.


And Ali was right too


If I understood correctly, are teraflops the final deciding factor in GPU power? Or what do these floating-point figures mean? How would you describe them for a user who doesn't understand all of this?

I think it was a bad PR move to put all this information out. This technical information does not matter to the average user and is not a final judgement of GPU power.

Graphics cards, for example, have 20 different sections, one of which is the Compute Units, which perform the processing. If the rest of the components are put to use in the best possible way, there are no other restrictions, there is no bottleneck in memory, and the processor always has the necessary data, then 12 Tflops can be achieved. So in an ideal world where we remove all the limiting parameters that's possible, but it isn't an ideal world. (He means we cannot remove all bottlenecks, so the 12 Tflops remains on paper.)

A good example of this is the Xbox Series X hardware. Microsoft made two separate pools of RAM, the same mistake they made with the Xbox One. One pool of RAM has high bandwidth and the other pool has lower bandwidth. As a result, coding for the console is sometimes problematic, because the amount of data we have to fit in the faster RAM pool is large enough to be a nuisance again, and to add insult to injury, 4K output needs even more bandwidth. So there will be factors which bottleneck the XSX's GPU.

 
Last edited:
What's the pixel fillrate of your system?

If slower GPUs with worse fill rates than the Series X can run the game at 8K just fine, the reasons why it isn't 8K on Series X are being narrowed down. The culprit is the split of 10 GB @ 560 GB/s and 6 GB @ 336 GB/s. The engine needed to be rewritten around the Series X's design to get the most from it. Even a GTX 1060 on PC benefits from more available RAM, thanks to not only GPU VRAM but also system memory.

Fortunately for Series X, 8K gaming will not be the norm this generation, but I'm quite certain this game isn't the first time the memory layout has impacted Series X in the performance department. It's why I personally can't wait for more devs to begin using Sampler Feedback Streaming in their games; I hope Microsoft can make it more accessible. It soon will be for Unreal 5, as The Coalition is working with Epic directly to get features like that implemented.
 

DenchDeckard

Moderated wildly
The only facts we know are that it is 8K60 on PS5 and it isn't on Xbox. Maybe they patch it and the whole conversation goes out of the window and changes, but ultimately this is where we are right now.

Now, saying that, if a PC with a 2080 can run it at 8K 60 fps... I really can't get my head around how these consoles can't, if it's coded for them. So it does make me lean towards wondering what is going on with the Series version. I also wonder, does the game have the Optimized for Series X|S badge? Because if it does, don't they have to be using the Series consoles' API?
 
Last edited:

Zathalus

Member
And the same cannot happen for PS5 because the Geometry Engine, cache scrubbers, I/O complex and Tempest Engine are 100% leveraged as of now?..
The Geometry Engine is the exact same thing as the Geometry Engine found in any RDNA card; there is absolutely no evidence of it being customised any further by Sony. Even the presentation by Mark Cerny does not allude to this; he discusses the exact feature set that the new Geometry Engine introduced in RDNA. Refer to pages 6-9 here.

Cache scrubbers are handled by the GPU itself, and game developers do not need to do anything to take advantage of them, as Cerny himself noted in The Road to PS5 from 19:10 on.

The I/O complex is not going to assist with GPU-demanding tasks; it is there to facilitate streaming of data from the SSD directly into RAM. It can certainly help with faster asset loading, and this is a solid advantage for the PS5.

The Tempest Engine is just 3D audio, and the XSX has custom dedicated 3D audio processors of its own.

The features I was referring to are directly responsible for assisting with GPU loads, such as VRS, which is used to great effect in Doom Eternal. Sampler Feedback Streaming can also have a positive performance impact, as can Mesh Shaders. In the case of Mesh Shaders, the PS5 has comparable technology via Primitive Shaders, so I expect minimal difference there.

Even with those advantages I expect games to range from being better on the PS5 to being better on the XSX, depending on the game engine and various other factors, with the overall advantage going to the XSX. Once again, the differences should not be anything major.
 

Dodkrake

Banned
I didn't say he was using all of them, just that he himself said that he rewrote the engine for the PS5 specifically to take advantage of the hardware.

Because the PS5 has way better low-level API access, which allows for massive efficiency gains. This is why iPhones, a few years back, ran so smoothly on crap hardware compared to their Android counterparts.
 

Mr Moose

Member
The only facts we know are that it is 8K60 on PS5 and it isn't on Xbox. Maybe they patch it and the whole conversation goes out of the window and changes, but ultimately this is where we are right now.

Now, saying that, if a PC with a 2080 can run it at 8K 60 fps... I really can't get my head around how these consoles can't, if it's coded for them. So it does make me lean towards wondering what is going on with the Series version. I also wonder, does the game have the Optimized for Series X|S badge? Because if it does, don't they have to be using the Series consoles' API?
Yeah.


 

onQ123

Member
If slower GPUs with worse fill rates than the Series X can run the game at 8K just fine, the reasons why it isn't 8K on Series X are being narrowed down. The culprit is the split of 10 GB @ 560 GB/s and 6 GB @ 336 GB/s. The engine needed to be rewritten around the Series X's design to get the most from it. Even a GTX 1060 on PC benefits from more available RAM, thanks to not only GPU VRAM but also system memory.

Fortunately for Series X, 8K gaming will not be the norm this generation, but I'm quite certain this game isn't the first time the memory layout has impacted Series X in the performance department. It's why I personally can't wait for more devs to begin using Sampler Feedback Streaming in their games; I hope Microsoft can make it more accessible. It soon will be for Unreal 5, as The Coalition is working with Epic directly to get features like that implemented.
Ori is also 6K on Xbox Series X, so that might be the sweet spot.
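For reference, the pixel counts involved, assuming "6K" here means 6144x3456 (the figure usually quoted for these console modes; 5760x3240 is another common definition):

```python
# Pixel counts behind the "6K sweet spot" idea. "6K" is assumed to be
# 6144x3456 here; 5760x3240 is another common definition.
RES = {"4K": (3840, 2160), "6K": (6144, 3456), "8K": (7680, 4320)}
base = RES["4K"][0] * RES["4K"][1]
for name, (w, h) in RES.items():
    print(f"{name}: {w * h / 1e6:5.1f} Mpix ({w * h / base:.2f}x 4K)")
# 8K is 4x the pixels of 4K; 6K is roughly 2.5x, a big step down from 8K.
```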
 