
Next-Gen PS5 & XSX |OT| Console tEch threaD


geordiemp

Member
Yes, still just rumors, but MLiD and RGT(?) stated, from industry insiders, that concessions were made for MS to fit (squeeze) 54 CUs onto the XSX APU. They ripped out the circuitry for the CUs (or individual shader cores) to alter/tune their frequencies around the base clock; this probably also included the 'pervasive fine-grain clock gating' that AMD had shown in their RDNA2 talk. These deleted features are important for efficiency and for keeping the GPU within its TDP when trying to work flat out, which may partly explain why the XSX is underperforming

... where the heck is geordiemp when we need him
Hiya. The CU size is likely the same on both consoles, plus or minus not a lot. XSX saved die space mainly by cramming more CUs into 4 shader arrays, so "more stuff is shared per CU" is an easy way to think about it.

The frequency control and fine-grain gating will take up some small amount of die area, but I doubt we would see that in the rough CU size.
 

Bo_Hazem

Banned
The LOD thing interests me; with UE5 demonstrated on the PS5, they stated there's no more need for LODs.

Yes, there is literally no LOD system, as it works on a per-frame polygon budget instead, copying the Sony Atom View tech from 2017, probably due to collaboration. It's the new method that will squeeze more out of current GPUs, but it needs pretty fast data streaming. Even against PCIe 5.0 with 32GB/s speeds, the "base" PS5 will still be keeping up with its 17-22GB/s throughput.
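To put rough per-frame numbers on those throughput figures (a back-of-the-envelope sketch using the GB/s values quoted above, nothing measured):

```python
# Rough streaming budget per frame at 60 fps, using the figures quoted in the post.
def per_frame_budget_mb(throughput_gb_s: float, fps: int = 60) -> float:
    """How many MB can be streamed into a single frame at a given sustained throughput."""
    return throughput_gb_s * 1024 / fps

for label, gbs in [("PS5 compressed (low)", 17), ("PS5 compressed (high)", 22), ("PCIe 5.0 x4 link", 32)]:
    print(f"{label}: ~{per_frame_budget_mb(gbs):.0f} MB per 60 fps frame")
```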

UE5 stated no more manual authoring of LODs. And presumably continuous LOD refinement.
The former doesn't say anything about the existence of LODs (i.e. we're all still using texture LODs, and will continue to do so as long as texture sampling is used, including in UE5 - but virtually no one has manually generated them in decades).
The latter just means LOD transitions are smooth - it does not mean eliminating discrete LODs (see textures again). While some computational models exist where the lines are a bit blurrier, even for continuous refinement there are almost always some discrete data points represented internally, especially in anything that has to operate in real time.

I know you are a Big Shot here and know much more than I do. But according to what I've watched already from Atom View and UE5, it's a frame-budget system instead, with one asset version.
 

Bo_Hazem

Banned
Market Cap is nonsense in terms of value.
You can change the value by 100bln with just 100 shares changing hands and 1mln in trading volume.
130bln market cap - Sony (1,400 subsidiaries, several dozen facilities, 30% world music share, second-biggest movie/picture entertainment business, lots of real estate, 50% world CMOS sensor share, No. 1 in OLED viewfinders, etc.).
30bln market cap - some company that has one brick game.
Sony assets:

I've never understood the market cap phenomenon on the web
:)
Like Elon Musk being the latest "richest man". He's not the richest. He only owns some % of stock that skyrocketed with computers' help. If he wanted to sell those shares, the value would drop 80%. Tesla's market cap is nonsense because it is bigger than Toyota + VW (Audi, Lambo, Skoda, Bentley, Ducati) + Mercedes-Benz, and Tesla has no profits and much lower revenue (30 times lower than those companies)
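To illustrate the arithmetic being criticised here (purely hypothetical numbers, chosen only to show the mechanism):

```python
# Market cap = shares outstanding x last trade price, so a tiny slice of trading
# re-marks every share. Hypothetical figures, just to show the leverage.
shares_outstanding = 1_000_000_000            # 1 billion shares
last_price = 130.00
print(f"Market cap: ${shares_outstanding * last_price / 1e9:.0f}bln")

new_price = 143.00                            # +10%, set by maybe 1M shares actually trading
delta = shares_outstanding * (new_price - last_price)
print(f"After a small rally: ${shares_outstanding * new_price / 1e9:.0f}bln "
      f"(+${delta / 1e9:.0f}bln, driven by ~0.1% of shares changing hands)")
```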

Yup, Market Cap sounds like Teraflops.
 

assurdum

Banned
The jump from PS2 to PS3 was huge, and PS3 was a premium product for its time. This time the console specs are good but they are not premium; they played it safe from a specs and price point of view.
PS3 was far from premium in multiplat performance, especially in the first wave. The mess around the specs of that console is something else. Kutaragi had completely lost his mind at the time. Can you imagine a PS3 without a "proper" GPU, as it was initially intended? According to him, a derivative of the PS2 GPU was enough.
 

Ptarmiganx2

Member
I agree. I don't know why people are discussing the idea of a PS5 Pro when we're only just beginning to see what the PS5 is capable of.
Right now developers are still in the evolution phase.
It will be a year or two before we start to see developers change their code to be revolutionary.
So I don't see a PS5 Pro happening, especially when the PS5 is more than capable of lasting up to 6 years.

It's better to use techniques like Checkerboard Rendering than to go out and spend millions to develop a Pro model.
The technology isn't powerful enough for a Pro model to make a huge difference anyway.
PS6 6-7 years from now makes more sense.
Logically this is correct. However, my illogical mind would buy one in a heartbeat even if there was minimal improvement. 😁
 

assurdum

Banned
once again, Tom and Alex are baffled by the performance drops in the XSX version. Alex literally says they don't know why Bungie would allow these drops. Like, are you fucking kidding me? Bungie isn't allowing shit. The XSX simply isn't able to run the game at that framerate at all times. Imagine if Alex reviewed a GPU that isn't able to do locked native 4K 60fps in a benchmark test and then blamed the benchmark instead of the GPU for the poor performance.

This is getting ridiculous, tbh. This is like the 15th game they have reviewed that has performance problems on the XSX, and while I am open to the tools conspiracy theory or excuse or whatever you want to call it, I think it's fair to say that at the moment the XSX isn't as capable as the PS5. There is no need to appear surprised or shocked, or to make excuses or blame developers. Very bizarre.
Man, you are talking about a site that can't discuss a weird PS5 fps issue without trotting out the "fewer TF" narrative (the last COD), but when Series X is involved it's always "why, why, why" from them. They are just looking at the headline spec numbers and nothing more, as a layperson would.
 

PaintTinJr

Member
UE5 stated no more manual authoring of LODs. And presumably continuous LOD refinement.
The former doesn't say anything about the existence of LODs (i.e. we're all still using texture LODs, and will continue to do so as long as texture sampling is used, including in UE5 - but virtually no one has manually generated them in decades).
The latter just means LOD transitions are smooth - it does not mean eliminating discrete LODs (see textures again). While some computational models exist where the lines are a bit blurrier, even for continuous refinement there are almost always some discrete data points represented internally, especially in anything that has to operate in real time.
You are probably correct, but...

What if they've found a way to encode a model as a 2D signal at each unique quaternion position (for each channel/characteristic) into an equation? They could then take the four corner positions of the pixel fragment they want to render, and then do integrations against those equations to get the channel value for each pixel.

Or at least, having watched Sony's Atom View tech video, that's the type of solution I'd be looking at for the rendering - so that each frame's background render budget was roughly constant, and independent of model count or complexity.
 

Duchess

Member
Sony have clearly done some magic (read: calculated optimisations) with the PS5, because the machine is performing so much better than many of us anticipated.

I honestly expected the XSX to stomp all over it when I looked at the specs of both machines, but we're not seeing that happen.

Perhaps things will change in a couple of years' time, but I have the feeling PS5 will always be one step ahead. Ratchet and Clank will give us a glimpse of what to expect in the future. Good job, Cerny.
 

DJ12

Member
Is the XBSX really the most powerful? That is the real question.
Mark Cerny said: "This continuous improvement in AMD technology means it's dangerous to rely on teraflops as an absolute indicator of performance."

Maybe the Flops and CUs in the PS5 are more powerful than in the XBSX.
Or the PS5 is just better engineered.
It's fair to compare PS5 and Xbox, as they are, or are at least reported to be, the same tech. But because people only look at teraflops, which is pretty meaningless for games, and nothing else, they add 2+2 and get 5.

In other metrics more relevant to gaming PS5 has a substantial lead, which is being shown in the results we keep seeing.

I imagine if Series X ran at a similar clock speed it would comfortably be better than PS5, but they zigged when they should've zagged. I guess this is the advantage Sony had by being on board with RDNA from the start: they completely understood how to use the technology to get the best out of it. There's a reason AMD also runs these RDNA2 cards fast.
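For reference, the headline teraflop figures fall straight out of CU count and clock; a quick sketch of the arithmetic (using the publicly quoted specs):

```python
# FP32 TFLOPS = CUs x 64 lanes x 2 ops per FMA x clock (GHz) / 1000.
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(f"PS5: {tflops(36, 2.23):.2f} TF at 2.23 GHz")     # ~10.28 TF
print(f"XSX: {tflops(52, 1.825):.2f} TF at 1.825 GHz")   # ~12.15 TF
# Everything tied to the clock (caches, rasterisers, command processor) runs
# roughly 22% faster on the higher-clocked part, which the single TF figure hides.
```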
 

Neo_game

Member
PS3 was far from premium in multiplat performance, especially in the first wave. The mess around the specs of that console is something else. Kutaragi had completely lost his mind at the time. Can you imagine a PS3 without a "proper" GPU, as it was initially intended? According to him, a derivative of the PS2 GPU was enough.

They made some bad decisions for sure and were too ambitious. It was selling at a loss even at $599, I think, and that was some 14 years ago. Now they are happy with a $399 budget.
 

Fafalada

Fafracer forever
I know you are a Big Shot here and know much more than I do. But according to what I've watched already from Atom View and UE5, it's a frame-budget system instead, with one asset version.
To be clear I said nothing about that - it seems obvious it's using a virtualized asset in that demo.
My point was just that algorithmically - discrete data detail points are virtually impossible to avoid, and even where they can be - it's not necessarily desirable or it can even be detrimental to do so.
Obviously from the user perspective, the whole point of asset virtualization is to use it as continuous dataset - but I was responding to the discussion on how hardware interacts with it - and that's where the underlying implementation matters.
E.g. MegaTexture was an example of a single-asset virtual texture that still worked with discrete LODs internally, for all of the above-mentioned reasons.
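For anyone following along, those internal discrete levels are just an automatically generated chain of downsampled versions; a minimal sketch of how such a mip chain is built (a toy box filter, not any engine's actual code):

```python
import numpy as np

def build_mip_chain(texture: np.ndarray) -> list:
    """Generate discrete texture LODs (mip levels) by repeated 2x box-filter downsampling.
    Tools and drivers do this automatically; nobody authors these levels by hand."""
    mips = [texture]
    while min(mips[-1].shape[:2]) > 1:
        h, w = mips[-1].shape[:2]
        prev = mips[-1][:h - h % 2, :w - w % 2]                    # trim odd row/column
        mips.append(prev.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3)))
    return mips

chain = build_mip_chain(np.random.rand(256, 256, 3))
print([m.shape[:2] for m in chain])    # 256 -> 128 -> ... -> 1: one discrete LOD per level
```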

What if they've found a way to encode a model as a 2D signal at each unique quaternion position - (for each channel/characteristic) into an equation? They could then take the four fragment corner locations positions of the pixel they want to render, and then do integrations against those equations to get the channel value for each pixel.
It's entirely possible they encode their geometry into 2D arrays (there was some patent collateral that appeared related - I recall reading about it around the time the demo broke), and there's been prior research on the subject. But that doesn't really say much about the performance characteristics of real-time sampling from that dataset.
And without getting very explicit about implementation details it's hard to say much more. Given they showed detail scaling from something around 1 pixel/mm2 all the way to 2-4 km viewing distance in that demo, you'd be hard pressed to argue that sampling everything from the top level would not be detrimental to performance (whether constantly, or just degrading the farther things get from the camera).
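A toy calculation of that last point (assumed numbers, only to show why discrete levels exist):

```python
import math

def mip_level(texture_size: int, pixels_on_screen: int) -> float:
    """Approximate mip level a sampler would pick: log2 of the texels-per-pixel footprint."""
    return max(0.0, math.log2(texture_size / max(pixels_on_screen, 1)))

tex = 4096
for px in (4096, 1024, 64, 8):                 # the object shrinking with distance
    footprint = (tex / px) ** 2                # texels under one pixel if only the top level existed
    print(f"{px:>4} px across: pick mip ~{mip_level(tex, px):.1f}; "
          f"top-level-only footprint = {footprint:,.0f} texels per pixel")
```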
 

sircaw

Banned
Ladies and gents, please let me give you all some sound advice...

Don't get Malaria... or Typhoid.

They suck ass.

I currently have both. Plus an additional bacterial infection, you know, for good measure.

Sorry I know this is OT, but I just need to vent a bit.

I remember when I lived in Zambia and my dad caught malaria. It was like 30 degrees, and he had a blanket and a duvet on top of him, yet he was ice cold and shivering. It took weeks for him to recover his strength; he was very close to not making it.

We used to sleep with these massive nets over our beds; it seems one of the little fuckers got in.

Hope you have a speedy recovery.
 
Ladies and gents, please let me give you all some sound advice...

Don't get Malaria... or Typhoid.

They suck ass.

I currently have both. Plus an additional bacterial infection, you know, for good measure.

Sorry I know this is OT, but I just need to vent a bit.

Holy crap, I hope you get better. I lived in a country with Japanese encephalitis, dengue fever and malaria. I know how nasty those diseases can be, and it's why I hate mosquitoes so much.
 
Yes, still just rumors, but MLiD and RGT(?) stated, from industry insiders, that concessions were made for MS to fit (squeeze) 54 CUs onto the XSX APU. They ripped out the circuitry for the CUs (or individual shader cores) to alter/tune their frequencies around the base clock; this probably also included the 'pervasive fine-grain clock gating' that AMD had shown in their RDNA2 talk. These deleted features are important for efficiency and for keeping the GPU within its TDP when trying to work flat out, which may partly explain why the XSX is underperforming

... where the heck is geordiemp when we need him
Isn't one of MLID's whiteboard topics "AMD vs MS"? I'm really curious about that, regardless of how much of it turns out to be true.

What made you think Xbox made a mistake? Why couldn't it be because the Playstation team did a good job?

It seems you assumed incorrectly that Microsoft is better at making consoles than Sony, when Sony is the actual hardware company. At some point you need to stop trying to find blame and give credit to the engineers at PlayStation who did their job well.

Yes, the Xbox team is "proud", but they were never basing their pride on anything. You need to re-evaluate the capability of both sides and change your assumptions.

This is kind of an inaccurate perspective considering where the companies are in modern times. They're both hardware & software companies, you can't have one without the other in the fields these companies do business in. Otherwise you oversimplify their fields of R&D, expertise, etc.

As well, if (and that's a very big if) the pattern between MS and Sony's platforms insofar as 3P performance holds out for the rest of the year, or even up until summer or fall, and it turns out it really is 100% down to root hardware differences, it means both you and Empire Strikes Back could be right; it wouldn't have to be one or the other.

All of that said, we still need more time before making such long-term definitive statements. Maybe things shake out on MS's end and we start to see 3P games taking leads (however large or small) on Series X more regularly. Or all the same, maybe Sony maintains the lead there or that lead even grows. No one can actually say for certain at this time how that will play out, we'll need at least a few more months of 3P releases before establishing a good basis there.

One thing I will say is that we should be seeing some big advancements in next-gen games this year across both platforms as developers gradually shed the shackles of last-gen requirements, meaning they can actually target the next-gen hardware more predominantly.

Actually that's false - LODs are still there; the big difference this gen is that they'll be invisible to the human eye. How?

By drawing and rendering polygons at the micro level (breaking down triangles in an asset to the size of a pixel and then changing LOD in real time as you're walking through the level, depending on how close to the object you are - and you won't even notice it happening).

This is where PS5’s I/O solution and cache coherency play a MAJOR role.
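A toy sketch of that "keep triangles at about a pixel" budget idea (not Epic's actual algorithm, just the scaling behaviour being described):

```python
# Hypothetical per-frame LOD decision: draw only as many triangles as the object
# covers pixels, regardless of how dense the source asset is.
def triangles_to_draw(source_triangles: int, pixels_covered: int) -> int:
    return min(source_triangles, pixels_covered)

source = 20_000_000                            # e.g. a film-quality sculpt
for pixels in (2_000_000, 200_000, 5_000, 50): # the object receding into the distance
    print(f"{pixels:>9} px covered -> draw ~{triangles_to_draw(source, pixels):,} triangles")
```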

(timestamped)



Like NX Gamer said, if you were to run this demo on PC right now with the exact same micro-polygon density as PS5, you would need something like 40GB of VRAM. But because of the ultra-high-speed SSD, the process is virtualized - pretty much what MLiD said in one of his videos: the PS5 effectively has 825GB of DDR2 RAM thanks to how fast the storage architecture is.


MLID is not 100% right about that equivalence. It's not just about how fast the storage is in terms of bandwidth. Yes, systems like PS5 are "mimicking" parallelized random access by having more channels (12 channels in Sony's case) and probably by specifically choosing NAND with upper-class latency figures (Toshiba NAND is usually pretty good for that), etc., but NAND is never going to have the low latency that actual DDR2 RAM does.

There's a lot more to RAM than just the bandwidth; accessing it in the first place always incurs a hit in cycle cost, and then there are other factors like bank activation timing, etc. NAND simply can't compete with DDR RAM on any of that; I'd even say DDR1 is better than any NAND device on that point, even if its actual bandwidth is low by today's standards.
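A back-of-the-envelope way to see the latency point (the latency and bandwidth figures below are rough order-of-magnitude assumptions, not measurements of either console):

```python
# Effective throughput for random reads when each access pays a fixed latency.
def effective_mb_s(latency_s: float, chunk_kb: float, stream_gb_s: float) -> float:
    transfer_s = (chunk_kb / 1024 / 1024) / stream_gb_s   # time to move the chunk itself
    return (chunk_kb / 1024) / (latency_s + transfer_s)   # MB moved per second overall

for name, latency, bw in [("DDR-class RAM", 100e-9, 25.0), ("fast NAND SSD", 50e-6, 5.5)]:
    for chunk_kb in (4, 64, 1024):
        print(f"{name:14s} {chunk_kb:>5} KB random reads: "
              f"~{effective_mb_s(latency, chunk_kb, bw):>8.0f} MB/s")
```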

Also, we should keep in mind that decompression isn't "free"; offloading it from the CPU (either wholly or in the majority) is a massive benefit the consoles have which PC doesn't (though PCs can brute-force it with more system RAM and good-enough SSDs), but it's still going to cost some cycles to pull in and decompress that data, and a few more cycles to write it into system GDDR6 memory. So in those areas, actual RAM, be it DDR2 or whatever, is always going to have the real-time advantage in terms of cycle cost savings.

That said, the SSD I/O in the next-gen consoles is, again, a massive step up from 8th-gen systems and prior, and coming pretty close to cartridge-based systems like SNES, MegaDrive, PC-Engine and Neo-Geo. We'll just need to wait until a new generation of NAND modules with even better latency figures and random access timings (and better interconnects/processing elements with lower latency and more power for quicker decompression and DMA write accesses) comes along for us to get SSD I/O with not only bandwidth that can match or exceed older DDR memories, but with real-world performance that actually cuts into that volatile memory space as well.

And I think that'll eventually happen in a couple of years, even without NVRAM (ReRAM, Optane, MRAM etc.).
 

roops67

Member
Ladies and gents, please let me give you all some sound advice...

Don't get Malaria... or Typhoid.

They suck ass.

I currently have both. Plus an additional bacterial infection, you know, for good measure.

Sorry I know this is OT, but I just need to vent a bit.
That's really rough, going through that hell! How did you even manage to catch it in this day and age - do you live somewhere where it's prevalent? Sorry to pry, don't answer if you don't want to. Keep your spirits up, get well real soon!!
 

SlimySnake

Flashless at the Golden Globes
Ladies and gents, please let me give you all some sound advice...

Don't get Malaria... or Typhoid.

They suck ass.

I currently have both. Plus an additional bacterial infection, you know, for good measure.

Sorry I know this is OT, but I just need to vent a bit.
Damn. How do you even get typhoid?

Hope you feel better soon.
 

DJ12

Member
I haven't watched the video, but I will take a wild guess and say that it was Alex who said the Xbox doesn't seem to use DRS?
So this "sharper" image might just be his bad vision™, which strangely always adds benefits to the Xbox here and there, which is weird, otherwise it would happen with other consoles as well.
That being said, coming from a guy who can't recognize ray tracing in video games, is it that surprising?
Haven't both VGTech and that Spanish YouTuber said PS5 also has better AA?
 
"Spanish youtuber" 😆

So you doubt DF but not this Spanish YouTuber? And of course PS5Tech, who just says anything that comes into his head without showing any proof.

After the recent mistakes that Digital Foundry has made, it wouldn't be prudent to take their word as gospel. As with all tech analysts, you need to take what they say with a grain of salt, although there are some I trust more than others. I trust Digital Foundry over someone like Dealer, for example.
 

DJ12

Member
When you have multiple outlets corroborating similar results, it's a stronger argument than authority.
Just look at what they missed in Dirt 5; the only thing they spotted was the dramatically reduced quality settings in the 120Hz mode.

Better AF, further LOD draw distance, higher-res textures and an overall higher average resolution on the PS5 version were totally missed (or deliberately omitted), not to mention the really poor implementation of VRS on Series X - all missed by the "authority".

First doesn't mean best; DF really needs to raise their game. Others, like them or not, are really highlighting DF's deficiencies and general lack of attention to detail. 'PS5Tech' just posts his videos and results. If there are differences he mentions them, but the evidence is right there in the videos to back it up; it's not like he's making it up.

"Spanish youtuber" 😆

So you doubt DF but not this Spanish YouTuber? And of course PS5Tech, who just says anything that comes into his head without showing any proof.
Two different corroborating sources is better than trusting the word of people who cannot even see the differences in quality mode on Dirt 5. They don't portray themselves as amateurs, but their analysis certainly is.

But if the omissions make you feel better, then enjoy their "analysis".
 
It was a demo, not a final game. It's not hard to understand that what we saw will bear no resemblance to what we see next. So it's pretty dumb to compare an unfinished game to a finished game. But go ahead and look stupid ;)
Getting a little personal there, buddy. It's not hard to understand that a demo, as in demonstration, was five months away from release.

You actually think what we saw in July would have left enough time to polish it, have it ready for the Xbox Series X launch, and still be considered a title that showcases next gen?

Obviously you know the answer already. Hence why there was such a backlash from gamers. Delaying it was the best outcome for everyone who wants to play it.

The only dumb thing was the decision to demo Halo Infinite in July, expecting gamers NOT to be disappointed with the state the game was in. Hopefully it has improved significantly since.
 

DJ12

Member
For everyone that believes "Tools" are the answer, please read this:

What is the Difference Between DirectX 11 and DirectX 12 | Hardware Times

DirectX 12 Ultimate: How is it Different from DirectX 12?

DirectX 12 Ultimate is an incremental upgrade over the existing DirectX 12 (tier 1.1). Its core advantage is cross-platform support: Both the next-gen Xbox Series X as well as the latest PC games will leverage it. This not only simplifies cross-platform porting but also makes it easier for developers to optimize their games for the latest hardware.

The way people talk in this thread is as though it's completely different from what they've been using in the Xbox SDK since 2015. It's not new, it's not completely foreign to devs as has been made out previously, and it's not something they "need to get a handle on" or need to learn. They know it, it does what it does, and that's it.

Expecting something good to come from better tools is not really going to happen, I'm sorry to say.

DirectX is fantastic for BC; the code doesn't care one shit what hardware it's running on if it's got all the same features (this is the "forward compatibility" tagline from the previous gen), and this is why Series X can run last-gen things at higher res, with better quality, without any fuss whatsoever. It also means that there is a layer taking performance away from gaming at all times. That layer's overhead only shrinks when other APIs - Mantle initially, and latterly Vulkan - get huge wins over DX by reducing it, which forces MS to update the API.

But the GDK being "new" is a total fallacy; it's a renamed and updated SDK and won't make much difference, if any.
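Not graphics code, but a deliberately crude Python analogy for the "layer taking performance away" point: every extra level of indirection between the caller and the work costs something on every call.

```python
import timeit

def hw_draw(n):                      # stand-in for the work the hardware actually does
    return n * 2

class ThickApi:                      # stand-in for a generic, validating API layer
    def draw(self, n):
        return self._validate_and_translate(n)
    def _validate_and_translate(self, n):
        return hw_draw(n)

api = ThickApi()
direct = timeit.timeit(lambda: hw_draw(16), number=1_000_000)
layered = timeit.timeit(lambda: api.draw(16), number=1_000_000)
print(f"direct: {direct:.3f}s  layered: {layered:.3f}s  overhead x{layered / direct:.2f}")
```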
 

cudiye

Member
My DualSense is acting up. Both the R2 and L2 triggers are very sensitive to light touches. Playing PS4 games is very difficult when a slight touch of the triggers affects the game. The trigger effects work fine in Cold War, so I dunno. Anyone else? And before you ask, I did drop the controller once, so idk 😂
 

PaintTinJr

Member
.....


It's entirely possible they encode their geometry into 2D arrays (there was some patent collateral that appeared related - I recall reading about it around the time the demo broke), and there's been prior research on the subject. But that doesn't really say much about the performance characteristics of real-time sampling from that dataset.
And without getting very explicit about implementation details it's hard to say much more. Given they showed detail scaling from something around 1 pixel/mm2 all the way to 2-4 km viewing distance in that demo, you'd be hard pressed to argue that sampling everything from the top level would not be detrimental to performance (whether constantly, or just degrading the farther things get from the camera).
But if it were equations they were recovering from disk, and regenerating the data on the fly by supplying coordinates (as integration limits for each viewport pixel), then the performance would be constant IMO, because the reconstruction (integration) would be one per viewport pixel, per channel/characteristic. So even if the source asset was created with 20 million polygons in ZBrush, the Atom View data representation - in my hypothetical solution - would just be 360 (deg) x 180 (deg) x model-channel-count equations, and they'd only need to retrieve the equation sets that match the specific orientation/quaternion of the model (180 around x and 360 around y) and reconstruct at each viewport pixel by integrating between limits for each pixel.

In the scenario where the entire model is so distant it only covers a couple of pixels, my hypothetical solution would have virtually no data to retrieve from disk and would only do a handful of calculations (per pixel, per channel); whereas the traditional polygon renderer would have to retrieve and evaluate most of the lowest model LOD's data for various parts of the rendering pipeline, to eventually shade something that was mathematically wrong.

And if the model were used more than once at the same orientation angle, then the equations retrieved would be the same, and only the number of reconstruction calculations - with different integration limits - would change for each instance in the frustum. But the workload per viewport pixel should remain fairly constant.
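A purely hypothetical sketch of that scheme (not how Nanite or Atom View actually work): one encoded, analytically integrable signal per channel, reconstructed with a single definite integral per pixel, so cost scales with pixels rather than source polygons. The sin/cos signal below is made up just so the integral has a closed form.

```python
import math

def channel_integral(u0, u1, v0, v1):
    """Definite integral of a toy encoded signal f(u, v) = sin(u) * cos(v) over one pixel footprint."""
    # Closed form: integral of sin(u)cos(v) du dv = (cos(u0) - cos(u1)) * (sin(v1) - sin(v0))
    return (math.cos(u0) - math.cos(u1)) * (math.sin(v1) - math.sin(v0))

def reconstruct(width, height):
    """One integral per pixel per channel: constant work per pixel, independent of asset complexity."""
    du, dv = 2 * math.pi / width, math.pi / height
    return [[channel_integral(x * du, (x + 1) * du, y * dv, (y + 1) * dv)
             for x in range(width)] for y in range(height)]

frame = reconstruct(320, 180)     # ~57,600 evaluations whether the source had 20M polygons or 20
print(len(frame) * len(frame[0]), "pixel evaluations")
```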
 

M1chl

Currently Gif and Meme Champion
For everyone that believes "Tools" are the answer, please read this:

What is the Difference Between DirectX 11 and DirectX 12 | Hardware Times



The way people talk in this thread is as though it's completely different from what they've been using in the Xbox SDK since 2015. It's not new, it's not completely foreign to devs as has been made out previously, and it's not something they "need to get a handle on" or need to learn. They know it, it does what it does, and that's it.

Expecting something good to come from better tools is not really going to happen, I'm sorry to say.

DirectX is fantastic for BC; the code doesn't care one shit what hardware it's running on if it's got all the same features (this is the "forward compatibility" tagline from the previous gen), and this is why Series X can run last-gen things at higher res, with better quality, without any fuss whatsoever. It also means that there is a layer taking performance away from gaming at all times. That layer's overhead only shrinks when other APIs - Mantle initially, and latterly Vulkan - get huge wins over DX by reducing it, which forces MS to update the API.

But the GDK being "new" is a total fallacy; it's a renamed and updated SDK and won't make much difference, if any.
That doesn't mean they've solved the XSX performance stack. Obviously it's the same for devs; the issue is for MS, tuning it in the background. I have my doubts that they were ready - they were late, far behind Sony - and on top of that they have to improve the performance stack themselves, incrementally. I was around development on the X1X when it was announced, and it was wild: we at Warhorse Studios received nightly builds just so it would not run like shit. That was when they had to take into account the different memory setup on X1X (compared to the eSRAM on X1S); a lot of things were buggy and underperforming. So I am not sure I can trust MS not to have fucked it up again.

And again, this is one big "maybe", but I am just saying that I have this experience, and the truth is that only in Gears 5 does the console get really warm; outside of that it's somewhat cold, which suggests the internal teams have these "nightly" builds of the SDK at their disposal. Gears 5: Hivebusters, for example, is probably the prettiest game on XSX and it runs like a dream.

So here are my 2 cents, if anyone cares :D

Also, this means that current games, if not built with a more mature SDK, are not going to improve; games are like the ISO of a virtual machine - everything has to be packed in there.
 
There is a good chance it's winning these cross-gen comparisons because the pared-back CPU isn't affecting performance, since these cross-gen games barely even use the CPU, as most PC benchmarks show us. Once next-gen games with heavy physics, massive NPC counts and destruction arrive, we might start to see issues if the variable clocks impact the CPU too much.
It has already been said and written so many times that variable frequencies are not there because the values have to float, or because it is impossible to run at stable ones, but in order to maximize the peak of the hardware's capabilities and the efficiency of each component when necessary. Learn the technical part, please, because the Xbox will have even more problems: its SoC also has a power limit, but there is no smart power management like in the PS5. The bottleneck will be exactly at the power limit, when the computational load on the chip is unusually high. That's why Cerny emphasized the need to move away from the old paradigm, due to its low efficiency, to the new one.
Think twice before you write another cool story about the issues of variable clocks.
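A toy model of the power-budgeted clock idea being described (assumed constants, not Sony's actual figures): dynamic power scales roughly with activity x f x V^2, and voltage has to rise with frequency, so power grows roughly with f^3; under a fixed SoC budget, lighter workloads can therefore clock higher.

```python
# Solve power_budget = k * activity * f^3 for f, with k a made-up fitting constant.
def max_clock_ghz(power_budget_w: float, activity: float, k: float = 18.0) -> float:
    return (power_budget_w / (k * activity)) ** (1.0 / 3.0)

budget_w = 180.0                      # hypothetical GPU share of the SoC power budget
for activity in (0.6, 0.8, 1.0):      # fraction of worst-case switching activity
    print(f"activity {activity:.1f}: ~{max_clock_ghz(budget_w, activity):.2f} GHz sustainable")
```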

 

roops67

Member
It has already been said and written so many times that variable frequencies are not there because the values have to float, or because it is impossible to run at stable ones, but in order to maximize the peak of the hardware's capabilities and the efficiency of each component when necessary. Learn the technical part, please, because the Xbox will have even more problems: its SoC also has a power limit, but there is no smart power management like in the PS5. The bottleneck will be exactly at the power limit, when the computational load on the chip is unusually high. That's why Cerny emphasized the need to move away from the old paradigm, due to its low efficiency, to the new one.
Think twice before you write another cool story about the issues of variable clocks.

You nailed it, well said

I was working on a post about how deceptive the XSX's high 'fixed' CPU clock is because of its thermal and power constraints. The PS5 has the same constraints, but it can work around them with SmartShift. The XSX can't throttle, so it has to defer workloads to when its CPU is less busy, can't do as much work in parallel, and will be limited in running many heavy AVX- and FMADD-like instructions. This is probably the real reason devs say the XSX is hard to develop for. All the more reason why its APU is really built for running in a server environment, where it can run at its fullest without thermal/power constraints and, split across 4 Xbox instances, get the most effective use out of its GPU.
 

Garani

Member
Sony didn't stop people from ripping out the APU. Those people could have done this by now.

As for me I would just be happy with actually having a PS5. No way would I sacrifice it for that.
Easy: making an X-ray is a costly activity, so whoever has done it wants to cash out. And I can't fault them for trying to get paid for the effort.
 

oldergamer

Member
Getting a little personal there, buddy. It's not hard to understand that a demo, as in demonstration, was five months away from release.

You actually think what we saw in July would have left enough time to polish it, have it ready for the Xbox Series X launch, and still be considered a title that showcases next gen?

Obviously you know the answer already. Hence why there was such a backlash from gamers. Delaying it was the best outcome for everyone who wants to play it.

The only dumb thing was the decision to demo Halo Infinite in July, expecting gamers NOT to be disappointed with the state the game was in. Hopefully it has improved significantly since.
You keep going back to the well on this one. What we saw couldn't even be classified as an "alpha". It was a gameplay demo, and not even shown on actual hardware! There was no indication it was even using the most up-to-date assets they had (those supposed developer leaks claim team members held off on making check-ins and that the build was broken daily). Clearly not an alpha state - would you agree?

It's a stupid comparison through and through. The people pushing the fall launch date at 343 were obviously incorrect, and heads rolled for it and for other reasons. It happens in the game industry, so what about it? Was it dumb to show it not running on Xbox hardware? Yes, specifically as it showed little to no polish and looked less polished than all other 343 releases. There was no point in showing it at the time, and I'm in full agreement on that point. However, nobody who compares an unfinished game to a finished game is in the right state of mind. Who does that? Sony fans don't do that with any other game I can think of, except this one.

So how does that make it valid to compare a pre-alpha game demo to a fully released final game? It literally makes no sense. People compare finished games to each other, but in this instance Sony fans repeatedly refuse to do that, for whatever reason (it honestly makes people look stupid). Not only that, but trying to single out how it looked and using that as some kind of graphical bar, or as representative of the capabilities of the X/S, is also stupid. You want to laugh at it, be my guest - it was a poor showing. But a pre-alpha game isn't representative of anything except an unfinished game, full stop.
 