
DF - Assassin's Creed Valhalla: PS5 vs Xbox Series X/Series S Next-Gen Comparison!

Bogroll

Likes moldy games
And the settings for PS5 and XSX are high according to Alex, not medium settings. So the PS5 is outperforming a 5700 XT.
Here's the same with high settings. Yes, the PS5 is outperforming in resolution, as it should, but it's where I expect it. It's not punching above its weight.
 

silent head

Member
UPDATE: The 15 per cent performance advantage mentioned here is averaged across a specific cross-section of play. As the graphs show, 'in the moment' differences can be as high as 25 per cent.

 

Radical_3d

Member
Why is this happening? The TF difference should be able to compensate for the lack of good tools.

Listen, we coders suck at parallel processing. The theory is right, the tools are known, but sometimes it’s just not as easy. It can be a problem of the architecture, the libraries, the third-party software and even the management. So of course a faster, weaker system is going to be more fully utilised than a broader and more powerful one. Also, I think the “only 10GB fast” design was a mistake.
 
Last edited:

assurdum

Banned
Listen, we coders suck at parallel processing. The theory is right, the tools are known, but sometimes it’s just not as easy. It can be a problem of the architecture, the libraries, the third-party software and even the management. So of course a faster, weaker system is going to be more fully utilised than a broader and more powerful one. Also, I think the “only 10GB fast” design was a mistake.
A faster but weaker system can't provide a comparable level of performance just because it's more fully utilised.
 

Radical_3d

Member
A faster but weaker system can't provide a comparable level of performance just because it's more fully utilised.
As you can see in the OP, it can. If you use 9 out of 10TF in a narrow design and 8 out of 12TF in a broad one, you are losing with the more powerful machine. I know it shouldn’t be like that, but look, the PS3 had 9 processors back in 2006, and still Intel beat way faster AMD CPUs because of single-core performance this very last generation. Games released a few years ago show the same story in DF's CPU reviews. It’s not just that easy.
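A rough back-of-the-envelope of what I mean (the utilisation numbers are just illustrative, not measurements of either console):

```python
# Hypothetical effective-throughput sketch: utilisation * peak TFLOPS.
# Numbers are purely illustrative, not measured values for either console.

def effective_tflops(peak_tflops: float, utilisation: float) -> float:
    """Effective compute = theoretical peak scaled by how much of it you actually keep busy."""
    return peak_tflops * utilisation

narrow = effective_tflops(10.0, 0.90)  # narrow design, well utilised -> 9.0 TF
broad  = effective_tflops(12.0, 0.67)  # broad design, poorly utilised -> ~8.0 TF

print(f"narrow: {narrow:.1f} TF, broad: {broad:.1f} TF")
# With these made-up figures, the nominally weaker machine does more real work per frame.
```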
 
There are two possibilities here. Either:

- Phil Spencer had 3-4 years to prepare for next gen but inexplicably wound up with not a single new XGS title to launch with their new hardware. Somehow the development tools are not ready either, despite MS specialising in this area and being the creators of DX12. If that is true he should be sacked because, as David Jaffe said, this is total incompetence and taking the piss out of their own fans.

- Teraflops, which have been marketed for the past year as the ultimate metric for performance, are in reality far from it: just one part of the real-world picture, and a theoretical one to boot. Xbox fans have been taken for a ride with this marketing, as in reality the PS5 outperforms the XSX quite handily.

Which is it? I think it's probably a mix of both. The unifying truth here is incompetence, either in leadership or marketing/design.
Yep, when people say "it's because of the old tools available", they are basically stating MS's incompetence with their new console launch.
 

ethomaz

Banned
That's not true at all.
VSync at 60FPS waits for a frame to be finished within 16.6ms; it doesn't matter if it finished in 16.5ms or in 10ms. The only thing VSync needs to stay active at 60fps is a framerate that is consistently higher than 60FPS, but that doesn't mean the uncapped framerate would be 70 or 80FPS.
That is not what I said.
VSync itself has a cost for the GPU.
If your GPU is rendering at exactly 60fps and you turn VSync on, the framerate will drop below 60fps and VSync won’t give you the results you expect.

To turn VSync on, your framerate needs to be high enough that the cost of enabling it doesn't push the framerate below 60fps.

15% was just an example based on the other poster's tests on PC... it could be 5%, which would only need it to be just over 64fps.
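Rough arithmetic for what I mean (the overhead percentage is an assumed example, not a measured figure):

```python
# Sketch: does a GPU stay locked at 60fps once a hypothetical VSync cost is added?
# The overhead percentage is an assumed example, not a measured figure.

TARGET_FPS = 60.0
FRAME_BUDGET_MS = 1000.0 / TARGET_FPS  # ~16.67 ms per frame at 60fps

def holds_60(uncapped_fps: float, vsync_overhead_pct: float) -> bool:
    """True if the frame still fits in the 16.67 ms budget after the VSync cost."""
    frame_ms = 1000.0 / uncapped_fps
    frame_ms_with_vsync = frame_ms * (1.0 + vsync_overhead_pct / 100.0)
    return frame_ms_with_vsync <= FRAME_BUDGET_MS

print(holds_60(61.0, 5.0))  # False: 61fps uncapped isn't enough headroom for a 5% cost
print(holds_60(64.0, 5.0))  # True: ~15.6 ms + 5% still fits under the 16.67 ms budget
```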
 
Last edited:

assurdum

Banned
As you can see in the OP, it can. If you use 9 out of 10TF in a narrow design and 8 out of 12TF in a broad one, you are losing with the more powerful machine. I know it shouldn’t be like that, but look, the PS3 had 9 processors back in 2006, and still Intel beat way faster AMD CPUs because of single-core performance this very last generation. Games released a few years ago show the same story in DF's CPU reviews. It’s not just that easy.
It doesn't work like this. What you say is generic nonsense.
 
Last edited:

VFXVeteran

Banned
Thank God you're here, we needed an amateur armchair analysis

Well, you know how some of the guys are... "imagine later in the gen when we can get full native 4k with 50 dynamic lights in the scene all running every RT feature there is at the same framerate!!!"
 

VFXVeteran

Banned
Aren't the PC and Series X versions virtually the same code? That's the purpose of the new GDK and GameCore. Wouldn't that mean that if there is a bug in the Xbox version it would also be present in the PC version? Do the PS4 and XOne versions have similar performance drops with the torch?

That *could* be an issue... but I know the cost of adding another dynamic light source to the scene. Every single object rendered now has to go through yet another light loop to determine shadows for each of the objects. It's just that expensive, so the *cost* of it is valid. I would not pin it on something unoptimized in DX12. They could try to optimize that code, but I doubt it, as it sits too close to the rendering engine. My 3090 is already running at 96%+ GPU usage. There isn't much more to squeeze out.

I will add one more thing. Valhalla stutters on the world map too. Is the map render that costly? There is clearly something wrong with some scenarios that tank Xbox performance. And that doesn't look like a bottleneck but rather a software bug.

Transparencies are also VERY costly. And this map is a 3D one, not 2D.
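A toy cost model for the "extra light loop" point above (object counts and per-draw costs are made up, and real engines batch and cull far more cleverly than this):

```python
# Toy sketch: why each additional shadow-casting light hurts.
# A naive forward renderer re-renders (or re-tests) the scene once per
# shadow-casting light to build its shadow map, so this slice of the frame
# scales roughly with objects * lights. Numbers are purely illustrative.

def shadow_pass_cost_us(num_objects: int, num_shadow_lights: int, cost_per_draw_us: float) -> float:
    """Approximate microseconds spent on shadow passes per frame."""
    return num_objects * num_shadow_lights * cost_per_draw_us

base  = shadow_pass_cost_us(5000, 1, 0.5)  # sun only
torch = shadow_pass_cost_us(5000, 2, 0.5)  # sun + the player's torch

print(f"1 light: {base / 1000:.1f} ms, 2 lights: {torch / 1000:.1f} ms per frame")
# Doubling the shadow-casting lights roughly doubles this part of the frame time.
```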
 

Three

Member
Yep, when people say "it's because of the old tools available", they are basically stating MS's incompetence with their new console launch.
It's the usual "hope and future promises" narrative for Xbox. Xbox sales sucked? "Phil only just took over from Mattrick, he will right this ship".
There are no games? "They only just bought studios, they are coming". They have poor performance in games? "They only just got dev tools out, it will get better".

Never looking at the product right now.
 

Radical_3d

Member
It's not that much weaker and it has its own advantages.
Talk about generic nonsense. If both operate at full output, it is weaker. Within the 10GB of RAM the bandwidth advantage is significant. The raw output of operations is 20% bigger in this theoretical scenario, and in the end that is what you need to render. There are no two ways around it. Tools aside (right now the XSX is performing 40% worse than its theoretical differential with the PS5, and this will change over time), the logical explanation is that the PS5 is far more efficient, and that is its own advantage: an architecture built to take the most out of those 10.3 TF.
 

Aladin

Member
 
XSX is certainly lacking some optimizations.

But why the hell is screen tearing making a comeback??? This thing was supposed to die last gen. Guess the jump to 120 FPS brought the ancient evil back :messenger_tears_of_joy:
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
So they run their consoles as virtual machines? I did wonder, but hadn't read it anywhere.

If so that's probably the most straightforward explanation. There may be no issues in the GDK per se, it's just inherently less efficient - but NOT egregiously so. Probably would also explain MS's drive for performance as they need the overhead.

It is an optimised version of what they described here AFAIK: https://www.eurogamer.net/articles/digitalfoundry-the-complete-xbox-one-interview

Nick Baker: There was lot of bitty stuff to do. We had to make sure that the whole system was capable of virtualisation, making sure everything had page tables, the IO had everything associated with them. Virtualised interrupts.... It's a case of making sure the IP we integrated into the chip played well within the system. Andrew?

Andrew Goossen: I'll jump in on that one. Like Nick said there's a bunch of engineering that had to be done around the hardware but the software has also been a key aspect in the virtualisation. We had a number of requirements on the software side which go back to the hardware. To answer your question Richard, from the very beginning the virtualisation concept drove an awful lot of our design. We knew from the very beginning that we did want to have this notion of this rich environment that could be running concurrently with the title. It was very important for us based on what we learned with the Xbox 360 that we go and construct this system that would disturb the title - the game - in the least bit possible and so to give as varnished an experience on the game side as possible but also to innovate on either side of that virtual machine boundary.

We can do things like update the operating system on the system side of things while retaining very good compatibility with the portion running on the titles, so we're not breaking back-compat with titles because titles have their own entire operating system that ships with the game. Conversely it also allows us to innovate to a great extent on the title side as well. With the architecture, from SDK to SDK release as an example we can completely rewrite our operating system memory manager for both the CPU and the GPU, which is not something you can do without virtualisation. It drove a number of key areas... Nick talked about the page tables. Some of the new things we have done - the GPU does have two layers of page tables for virtualisation. I think this is actually the first big consumer application of a GPU that's running virtualised. We wanted virtualisation to have that isolation, that performance. But we could not go and impact performance on the title.

We constructed virtualisation in such a way that it doesn't have any overhead cost for graphics other than for interrupts. We've contrived to do everything we can to avoid interrupts... We only do two per frame. We had to make significant changes in the hardware and the software to accomplish this. We have hardware overlays where we give two layers to the title and one layer to the system and the title can render completely asynchronously and have them presented completely asynchronously to what's going on system-side.

System-side it's all integrated with the Windows desktop manager but the title can be updating even if there's a glitch - like the scheduler on the Windows system side going slower... we did an awful lot of work on the virtualisation aspect to drive that and you'll also find that running multiple system drove a lot of our other systems. We knew we wanted to be 8GB and that drove a lot of the design around our memory system as well.
 

kuncol02

Banned
And a year ago Phil Spencer said he took his console home with him.
And that's possible, if it was running only games written with the XDK and not the new GDK. That was probably one of the first finished units. We know that the GDK was not finished even in July this year, and we know that from official MS documentation. The first info about Series S was in the June release (in a note saying that GPU profiling for Lockhart is currently not working in some scenarios).

That *could* be an issue... but I know the cost of adding another dynamic light source to the scene. Every single object rendered now has to go through yet another light loop to determine shadows for each of the objects. It's just that expensive, so the *cost* of it is valid. I would not pin it on something unoptimized in DX12. They could try to optimize that code, but I doubt it, as it sits too close to the rendering engine. My 3090 is already running at 96%+ GPU usage. There isn't much more to squeeze out.

Transparencies are also VERY costly. And this map is a 3D one, not 2D.
I'm not convinced about that map. AFAIK it's exactly the same map as in Origins and Odyssey. Those were displayed without tearing and dips on a standard XOne.
 

ethomaz

Banned
Virtualization is nothing new and is not a problem (unless they seriously fucked something up upgrading the software).
Virtualization has overhead... a hardware call needs to pass through the virtualization logic to reach the actual hardware.

That said, typical hypervisor overhead is about 1-5% for CPU and 5-10% for memory.
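For a sense of scale, applying those generic ranges to a frame budget (just the typical figures quoted above, not anything measured on the Series X):

```python
# Sketch: applying generic hypervisor overhead ranges to a fixed CPU frame budget.
# The 1-5% figures are the typical ranges quoted above, not console measurements.

CPU_FRAME_BUDGET_MS = 16.67  # one 60fps frame's worth of CPU time

for overhead_pct in (1, 5):
    lost_ms = CPU_FRAME_BUDGET_MS * overhead_pct / 100.0
    print(f"{overhead_pct}% CPU overhead ~= {lost_ms:.2f} ms lost per 60fps frame")
# Small, but not free -- which is the point being made here.
```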
 
Last edited:

Md Ray

Member
I want to address that torch.

It takes away a good 10FPS from a 3090 at 4K/Ultra. The torch is an expensive render... why? Because having more than one shadow-casting light source wreaks havoc on ALL GPUs. It's a shame that that is the case, but here we are: creating shadows continues to be very expensive. PS5 drivers are just really good at the dynamic res and probably drop resolution down enough to keep the FPS high.
The resolution can actually be lower in busier sections on XSX than PS5.
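As an aside on the "10FPS" figure above: an fps drop translates into very different amounts of GPU time depending on the baseline framerate. Quick sketch with assumed baselines (not figures from the video):

```python
# Sketch: converting an fps drop into actual GPU time, since "10 fps" means
# different amounts of work depending on the starting framerate.
# Example baselines are assumed, not taken from the DF capture.

def added_ms(fps_before: float, fps_after: float) -> float:
    """Extra per-frame GPU time implied by the fps drop."""
    return 1000.0 / fps_after - 1000.0 / fps_before

print(f"{added_ms(60, 50):.2f} ms")    # 60 -> 50 fps: ~3.33 ms of extra GPU work
print(f"{added_ms(120, 110):.2f} ms")  # 120 -> 110 fps: only ~0.76 ms
```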
 

FrankWza

Member
Well, you know how some of the guys are... "imagine later in the gen when we can get full native 4k with 50 dynamic lights in the scene all running every RT feature there is at the same framerate!!!"

I don’t doubt the PS5 will be able to handle this especially with that custom SSD. It’s not going to happen until they stop with cross gen though.
 

Panajev2001a

GAF's Pleasant Genius
Lots of people in this thread are blaming virtualization, when in reality it probably has less effect on performance than the API differences between PS5 and XSX.

Sure, never stated otherwise :). It is still not free nor trivial. The focus is on creating a fast generalised platform, not the fastest custom one.
 

DeepEnigma

Gold Member
VSync causes stuttering when a frame can't be output at the exact interval; it will then need to wait a full frame interval, causing a stutter. VRR is fine with black levels as long as the frametimes aren't too far off the ideal frametimes.

I have never seen "stuttering" with a good vsync solution. I've seen frame-pacing issues, but that is totally different.
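The disagreement here mostly comes down to what happens when a frame misses its slot. A quick timing sketch (frame times are example values, and it assumes the following frame arrives on time):

```python
import math

# Sketch: with VSync on a 60Hz display, a frame that misses the 16.67 ms
# deadline is held until the next refresh, so it occupies the screen for
# 33.3 ms -- a visible hitch even though it only took ~17 ms to render.

REFRESH_MS = 1000.0 / 60.0  # 16.67 ms per scanout at 60Hz

def displayed_for_ms(render_ms: float) -> float:
    """Time the frame stays on screen: rounded up to whole refresh intervals."""
    return math.ceil(render_ms / REFRESH_MS) * REFRESH_MS

for t in (15.0, 16.6, 17.0):
    print(f"rendered in {t} ms -> shown for {displayed_for_ms(t):.1f} ms")
# The 17.0 ms frame spills into a second interval (33.3 ms on screen) -- the
# judder described in the quote. VRR instead refreshes when the frame is
# ready, within the display's supported range, so the double-length frame is avoided.
```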
 

ethomaz

Banned
Lots of people in this thread are blaming virtualization, when in reality it probably has less effect on performance than the API differences between PS5 and XSX.
It still has overhead.
It is impossible to do virtualization for free.

A hardware call through virtualization needs more CPU cycles than a direct call... an API call through virtualization will have more overhead than an API call direct to the hardware.

There are a lot of benefits to virtualization and the performance is really great, but you lose a bit of the hardware performance for it.
 
Last edited:

scydrex

Member
AC games are terrible at optimization, just wanna let you guys know. AC Unity ran better on the Xbox One than on the PS4. That does not mean the console is weaker; there is more to this than just that.

The Xbox One CPU runs at 1.75GHz and the PS4's runs at 1.6GHz; I don't know how much that affects performance, but it is a bit higher... In this case the XSX is running a higher-clocked CPU and the GPU has 2TF more...
 