
Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.

kyliethicc

Member
Or to play Resident Evil at a higher framerate than the X1X despite having less RAM. Again, it's almost like RAM is not the end-all be-all.


What you meant and what you typed were different. It. Does. Not. Need. 10GB. Of. RAM. (this is silly) To run games at 1080p, bro. My point still stands. On top of that, once devs start using SFS and VA, RAM issues will be even less important.

People are acting like the XSS is supposed to run games exactly like the XSX with less memory. News flash! It isn't. The funny thing is, despite its lack of memory, it's clearly outperforming last generation's consoles, just like it was designed to. It didn't need to be, nor was it designed to be, a carbon copy of Sony's strategy.
Yes, when a game is designed for 5 GB of RAM, a console with 7.5 GB will do just fine.

 

jroc74

Phone reception is more important to me than human rights
Do you compare entering a bossroom and loading into the world?
Whot? Yeah, bet you played the game to know about it :D
Exactly, lol. I just realized they tried to pass that off as the same thing. Sad...

Like Returnal, there are a couple of instances of loading:

Using a telepad in the level (has an animation, not as fast as the other 2 but still fast)
Going to a new biome (really fast)
A shortcut to the boss in biome 2. (really fast)

Each has a different loading speed. Every type of loading won't have the same speed. But each is fast as hell.
 
Last edited:

Boglin

Member
Better give up. It seems like Sony and Microsoft (for the X) wasted a lot of money on having that much RAM in their consoles.
It is pretty clear by now that in 5 years we will still have games that require the same amount of RAM as today. At least going by what he is saying.
Microsoft have deep pockets and have world class engineers on their hardware teams so they don't need to make compromises. If they choose to have 10GB or 16GB of memory it's because there is no benefit to having any more than that. They can literally download more ram with velocity architecture.
 

Panajev2001a

GAF's Pleasant Genius
Microsoft have deep pockets and have world class engineers on their hardware teams so they don't need to make compromises. If they choose to have 10GB or 16GB of memory it's because there is no benefit to having any more than that.
They are good engineers because they make compromises to keep designs under a certain cost and are able to bring these systems to mass production.
XSS having only 8 GB of RAM at full speed and 2 GB at much reduced speed is a compromise. XSX having only 10 GB of RAM at full speed and 6 GB at much reduced speed is a compromise (you can say they chose it because they bet that the cost required for more and faster RAM was too high for the returns it gave, but it is still a pragmatic compromise, or choice if that word sounds too ugly).

They can literally download more ram with velocity architecture.
You have the same super fast SSD access/ XVA on the XSX so if you hit its limits you will have problems on XSS unless you design for XSS first as the lead platform and scale bits and bobs up afterwards (which will not get XSX’s peak).

Using the SSD as extended memory is fast but nowhere near as fast as the normal RAM is.
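The split pools being described can be sketched with the publicly reported figures (capacities in GB, bandwidths in GB/s). The capacity-weighted average below is only a naive illustration; real effective bandwidth depends on which pool each buffer actually lands in:

```python
# Split memory pools as publicly reported: (capacity GB, bandwidth GB/s).
pools = {
    "XSX": [(10, 560), (6, 336)],
    "XSS": [(8, 224), (2, 56)],
}

def weighted_bandwidth(parts):
    """Naive capacity-weighted average bandwidth for a split pool."""
    total = sum(cap for cap, _ in parts)
    return sum(cap * bw for cap, bw in parts) / total

for console, parts in pools.items():
    print(console, weighted_bandwidth(parts))  # XSX -> 476.0, XSS -> 190.4
```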
 

Rea

Member
Exactly, lol. I just realized they tried to pass that off as the same thing. Sad...

Like Returnal, there are a couple of instances of loading:

Using a telepad in the level (has an animation, not as fast as the other 2 but still fast)
Going to a new biome (really fast)
A shortcut to the boss in biome 2. (really fast)

Each has a different loading speed. Every type of loading won't have the same speed. But each is fast as hell.
Personally, I don't even think Returnal has loading; the particle animation effects when you teleport are purposely delayed to let the player see them.
LoL
 

Boglin

Member
thank fuck I was worried lol
Yeah, sorry. My post was meant to be more of a ribbing at the staunch Xbox fans who think MS is infallible.

Truth be told though, I think the Xbox Series X is an amazingly designed piece of kit for having a cost effective SoC that functions well for a game console as well as for their xCloud servers. I'm also impressed that they were willing to leave the typical console form factor behind in favor of their tower design that by volume offers much better cooling while staying quiet. The compromises they did make with the separate ram speeds are hugely justifiable for their $500 box and the higher level API is paying off well with BC titles.

Both Microsoft and Sony have extremely competent engineers and I guess it grates on me when one side's fanboys act like the other side is run by a bunch of idiots. The amount of people who think Sony went out of their way to develop an end-to-end I/O solution for nothing more than to reduce loading times by a couple of seconds astounds me.
 

Lysandros

Member
I mean look at the PS5 DE at just $400. Full specs, just no discs. Great price.

Microsoft wanted to hit the $300 price point, fine. So trim the specs A LITTLE bit from the XSX.

They could have just done this...

8 CPUs 16t @ 3.6 GHz
40 CUs @ 1.6 GHz (8 TF)
16 GB @ 448 GB/s
500 GB SSD @ 2.4 GB/s

.. and still sold it for $300. That's still a noticeably cut-down spec from their $500 box, unlike the PS5 DE at $400. And it wouldn't have fucked the devs as much. And it would have been able to run XONEX games via BC because it'd have the same 40 CUs and enough RAM.

(Or they could have just sold a digital XSX for $400.)
How would Microsoft sell a configuration with double the (unified) bandwidth, 6 GB more RAM, and twice the teraflops of the Series S for just $300? I find this highly unlikely.
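For reference, the teraflop figures being thrown around follow from the standard GCN/RDNA formula (CUs x 64 lanes x 2 ops per FMA x clock). A quick sketch, using the thread's hypothetical 40 CU @ 1.6 GHz part alongside the real XSS and XSX clocks:

```python
# FP32 throughput for a GCN/RDNA-style GPU:
# CUs x 64 lanes x 2 ops per clock (FMA) x clock in GHz -> teraflops.
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

print(tflops(40, 1.6))    # hypothetical $300 box above -> ~8.19 TF
print(tflops(20, 1.565))  # actual XSS -> ~4.01 TF
print(tflops(52, 1.825))  # actual XSX -> ~12.15 TF
```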
 

roops67

Member
640K ought to be enough for anybody.
Classic! A lot of people made Bill Gates out to be a genius; he was nothing more than an opportunist crook who was at the right place at the right time. He often missed the mark, like that '640k is enough for anybody' and 'the internet is nothing more than a fad that will never take off'. Some genius he was! His biggest crime was creating Microsoft. Anyway, at least he's making up for his past now as a philanthropist.
 

Jemm

Member
640K ought to be enough for anybody.
Classic! A lot of people made Bill Gates out to be a genius; he was nothing more than an opportunist crook who was at the right place at the right time. He often missed the mark, like that '640k is enough for anybody' and 'the internet is nothing more than a fad that will never take off'. Some genius he was!

There is no proof, that he ever said that:
Gates himself has strenuously denied making the comment. In a newspaper column that he wrote in the mid-1990s, Gates responded to a student's question about the quote: "I've said some stupid things and some wrong things, but not that. No one involved in computers would ever say that a certain amount of memory is enough for all time." Later in the column, he added, "I keep bumping into that silly quotation attributed to me that says 640K of memory is enough. There's never a citation; the quotation just floats like a rumor, repeated again and again."
Source: The '640K' quote won't go away -- but did Gates really say it? | Computerworld
 
Last edited:

jroc74

Phone reception is more important to me than human rights
Personally, I don't even think Returnal has loading; the particle animation effects when you teleport are purposely delayed to let the player see them.
LoL
Exactly.

Like some said, devs said this could happen. Doesn't mean it's needed, just means that's what the devs want us to see.
 

Riky

$MSFT
'No one wants to play old games' Jim Ryan would be proud! MS is bad for having backward compatibility!

Or to play Resident Evil at a higher framerate than the X1X despite having less RAM. Again, it's almost like RAM is not the end-all be-all.


What you meant and what you typed were different. It. Does. Not. Need. 10GB. Of. RAM. (this is silly) To run games at 1080p, bro. My point still stands. On top of that, once devs start using SFS and VA, RAM issues will be even less important.

People are acting like the XSS is supposed to run games exactly like the XSX with less memory. News flash! It isn't. The funny thing is, despite its lack of memory, it's clearly outperforming last generation's consoles, just like it was designed to. It didn't need to be, nor was it designed to be, a carbon copy of Sony's strategy.

You're dealing with people who have next to no understanding of what they are talking about; it's best to just laugh at them.
You only have to look at VRAM usage on an 8GB card at 1080p to see there is no problem. Add the RDNA2 hardware support for SFS and Mesh Shaders, plus the joint GDK meaning the minimum spec won't be Series S for first party games, and there's no problem going forward.
Sony's approach is to move a lot of data very fast; Microsoft's approach is to move less data, just fast.
 
Last edited:


If true, it's likely purely a 6nm redesign to increase production volumes by fabbing more APUs on a second node. That 6nm offers benefits to perf/watt and die size is a cost-benefit Sony will most assuredly pocket. It's possible they can primarily produce the PS5 DE on this 6nm node to shave off a significant chunk of the BOM, thus allowing them to ship considerably more consoles without taking any loss per unit.

I'm wondering if this 6nm is a TSMC node? I've never heard of a 6nm node before.

Edit: Aaaah, here we go. Same design rules as 7nm DUV, 18% higher transistor density but without any change to performance or power consumption. So would be a straight die shrink from the current N7P node:

 
Last edited:

Sinthor

Gold Member
Microsoft have deep pockets and have world class engineers on their hardware teams so they don't need to make compromises. If they choose to have 10GB or 16GB of memory it's because there is no benefit to having any more than that. They can literally download more ram with velocity architecture.
LOL. Good one! That bit about the velocity architecture....took me a minute to process this. You had me going for a while! :)
 
Last edited:

Sinthor

Gold Member
Yeah, sorry. My post was meant to be more of a ribbing at the staunch Xbox fans who think MS is infallible.

Truth be told though, I think the Xbox Series X is an amazingly designed piece of kit for having a cost effective SoC that functions well for a game console as well as for their xCloud servers. I'm also impressed that they were willing to leave the typical console form factor behind in favor of their tower design that by volume offers much better cooling while staying quiet. The compromises they did make with the separate ram speeds are hugely justifiable for their $500 box and the higher level API is paying off well with BC titles.

Both Microsoft and Sony have extremely competent engineers and I guess it grates on me when one side's fanboys act like the other side is run by a bunch of idiots. The amount of people who think Sony went out of their way to develop an end-to-end I/O solution for nothing more than to reduce loading times by a couple of seconds astounds me.
Bingo. Totally agree with this. I'm seriously impressed by both of the new consoles this generation. Fascinating differences between them as well. It will be interesting to see if one philosophy or the other ends up being the 'way of the future.' Or perhaps both are just valid and can each be pursued going forward. Should be interesting!
 

SlimySnake

Flashless at the Golden Globes

That's pretty impressive. Didn't Locuza estimate the PS5 die was around 300mm2, if not less than that? A 20% reduction would bring it down to just 240mm2, only 30mm2 bigger than the XSS. Though with the faster clocks, it will still require a much bigger cooling solution.

Spider-Man was topping out at around 205 watts. I am guessing we are looking at a 170 watt PS5, which would still be much higher than the PS4 and PS4 Pro but in line with the X1X. I am guessing the heatsink will see a big reduction in size, which means a smaller PS5. Maybe a completely different looking one to save on build costs?
 
That's pretty impressive. Didn't Locuza estimate the PS5 die was around 300mm2, if not less than that? A 20% reduction would bring it down to just 240mm2, only 30mm2 bigger than the XSS. Though with the faster clocks, it will still require a much bigger cooling solution.

Spider-Man was topping out at around 205 watts. I am guessing we are looking at a 170 watt PS5, which would still be much higher than the PS4 and PS4 Pro but in line with the X1X. I am guessing the heatsink will see a big reduction in size, which means a smaller PS5. Maybe a completely different looking one to save on build costs?

LeviathanGamer's wrong on the perf vs power consumption aspects.

As per the link I posted previously, TSMC's 6nm node doesn't offer any performance (i.e. clock speed) or power benefits over their N7P process, only the increase in transistor density, which lowers production cost via the greater number of chips per wafer.


Even if the 6nm node offered higher performance per watt, Sony wouldn't change the clock ceiling for the GPU or CPU; instead they would pocket the cost savings from the lower power consumption of the 6nm redesign.
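As a sanity check on the die-size arithmetic earlier in the thread: an 18% density increase shrinks area by 1/1.18, i.e. roughly 15% rather than 20%, assuming a straight 1:1 port of the design (in practice analog and SRAM blocks shrink less, so this is a best case):

```python
# An 18% logic-density increase shrinks die area by 1/1.18 (~15%),
# assuming a 1:1 port; analog and SRAM typically shrink less.
density_gain = 1.18
ps5_die_mm2 = 300  # rough estimate floated in the thread

shrunk_mm2 = ps5_die_mm2 / density_gain
reduction_pct = 100 * (1 - 1 / density_gain)
print(round(shrunk_mm2), round(reduction_pct))  # -> 254 15
```

More dies per wafer then scales roughly with 1/area, ignoring edge effects and yield.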
 

DeepEnigma

Gold Member
That's pretty impressive. Didn't Locuza estimate the PS5 die was around 300mm2, if not less than that? A 20% reduction would bring it down to just 240mm2, only 30mm2 bigger than the XSS. Though with the faster clocks, it will still require a much bigger cooling solution.

Spider-Man was topping out at around 205 watts. I am guessing we are looking at a 170 watt PS5, which would still be much higher than the PS4 and PS4 Pro but in line with the X1X. I am guessing the heatsink will see a big reduction in size, which means a smaller PS5. Maybe a completely different looking one to save on build costs?
It's really simple.

Save on build costs, and more dies per wafer means increased production. Something all of us who don't want to pay scalper prices or camp websites in the wee hours would like to see.
 

DeepEnigma

Gold Member
This STILL doesn't address the fact that the XSS outperforms the X1X despite having less RAM, so your original point about RAM is wrong. Also, when you use SFS and VA you will be able to deal with having a smaller RAM pool. You don't need tons of memory for a system targeting 1080p.
Address the fact that the XSS needs to run X1S BC profiles because it lacks the RAM that's needed for the X1X profile. Yes, it's better than the X1X performance-wise, due to the SSD/CPU, even while by default running a lower-spec profile, but the lack of RAM compared to the X1X is still a shortcoming in contrast.

So RAM does indeed impact scope and game design/features.
*when playing games designed to run on the XB1 which has less RAM than the XSS.

Don't be dense.
This. The mental gymnastics are astounding sometimes. Just take the small L, it will be okay. There is an XSX to hug, the XSS doesn't need so much defense for its anemic RAM footprint.
 
Last edited:

SlimySnake

Flashless at the Golden Globes
LeviathanGamer's wrong on the perf vs power consumption aspects.

As per the link I posted previously, TSMC's 6nm node doesn't offer any performance (i.e. clock speed) or power benefits over their N7P process, only the increase in transistor density, which lowers production cost via the greater number of chips per wafer.


Even if the 6nm node offered higher performance per watt, Sony wouldn't change the clock ceiling for the GPU or CPU; instead they would pocket the cost savings from the lower power consumption of the 6nm redesign.
Oh yeah, I dont think they will increase clocks until a PS5 Pro. They will definitely use it as cost saving measure.

I do wonder if they would wait for the 3nm fabs to mature before making a PS5 Pro. I would personally upgrade in November 2023, but if I have to wait another year for a 20 tflops Pro, that's ok too.
 

yamaci17

Member
This STILL doesn't address the fact that the XSS outperforms the X1X despite having less RAM, so your original point about RAM is wrong. Also, when you use SFS and VA you will be able to deal with having a smaller RAM pool. You don't need tons of memory for a system targeting 1080p.
X1X runs at native 1080p and mostly locks to 60; the only places where it drops frames are because of its very outdated, bad CPU. This is a point everyone agrees on: Series S has a superior CPU and will always manage to get to 60 FPS for the entirety of the generation (well, 648p, 720p and 810p are a different talk that the One X never deigned to have and never will).

X1X GPU still has the power to push 2.1m pixels at 60 FPS

XSS runs CHECKERBOARDED 1440p. That means the actual render resolution is 1280x1440, which comes to 1.8M pixels. In this config, it also mostly locks to 60 FPS.

Lower pixel count actually gives a bit headroom so that it can run ray tracing along with the same resolution.

Sorry, but I don't accept "checkerboard" as a native resolution. It barely looks passable at 4K, and it is not designed to be efficient at 1440p/1080p. I've tried and it looks very bad. Only at 4K, the blocky effects seem to be reduced, but not completely.

But of course, since the "cheap console" crutch is in play, anything from 720p/810p to checkerboarded 1440p (actually, 992p quality) will get a pass for the console, it seems.



X1X also uses the same checkerboarding in its 4K mode. And guess what? At 1920x2160 (a much higher pixel count than 1280x1440), it still holds 45+ FPS.

I'm not primarily saying that the One X is a better console. But it still has a beefy GPU that can match or outperform the Series S GPU.

When and if games start to use full capabilities of RDNA2, then Series S may "slightly" outperform the One X.
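The pixel counts in this comparison are easy to verify. Checkerboarding renders roughly half the target grid each frame, so "1440p CB" means a 1280x1440 internal buffer:

```python
# Pixel budgets behind the comparison: checkerboarding renders half the
# target grid each frame, e.g. "1440p CB" = a 1280x1440 internal buffer.
def megapixels(w, h):
    return w * h / 1e6

print(megapixels(1280, 1440))  # XSS, checkerboarded 1440p -> 1.8432
print(megapixels(1920, 1080))  # X1X, native 1080p        -> 2.0736
print(megapixels(1920, 2160))  # X1X, checkerboarded 4K   -> 4.1472
```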
 
Last edited:
X1X runs at native 1080p and mostly locks to 60; the only places where it drops frames are because of its very outdated, bad CPU. This is a point everyone agrees on: Series S has a superior CPU and will always manage to get to 60 FPS for the entirety of the generation (well, 648p, 720p and 810p are a different talk that the One X never deigned to have and never will).

X1X GPU still has the power to push 2.1m pixels at 60 FPS

XSS runs CHECKERBOARDED 1440p. That means the actual render resolution is 1280x1440, which comes to 1.8M pixels. In this config, it also mostly locks to 60 FPS.

Lower pixel count actually gives a bit headroom so that it can run ray tracing along with the same resolution.
Do people agree on this though? There are several people here claiming the X1X is the superior device to the XSS, the Switch too but maybe they are joking. Also I'm pretty sure the X1X does NOT have any raytracing at all yet the XSS does. Like it or not the XSS is more powerful than the X1X despite it being the upgrade to the X1S. All this is without devs utilizing its full suite of features as well.
 

yamaci17

Member
Do people agree on this though? There are several people here claiming the X1X is the superior device to the XSS, the Switch too but maybe they are joking. Also I'm pretty sure the X1X does NOT have any raytracing at all yet the XSS does. Like it or not the XSS is more powerful than the X1X despite it being the upgrade to the X1S. All this is without devs utilizing its full suite of features as well.

I'm not saying the X1X is superior. In fact, in terms of pure gaming performance, the XSS will likely catch up to and outperform the X1X at some point.

Look, here are the simple facts:

- Series S is not meant to handle ray tracing. What you see these days are mere demos. Some games already omit RT on Series S.
- Series S clearly cannot handle RT in this game. And it's not because it runs at 1440p, because it does not actually run at 1440p at all; the pixel count is below 1080p. It practically averages 40 fps with RT enabled. Yeah, still playable, I would play it, but I would not say it's "handling it". And be reminded, this is one of the lightest ray tracing games out there and the effects are barely perceptible (for me at least).

In this game's case:

X1X manages to stay over 45 FPS at cboarded 4k (4.17m pixels)
XSS manages to lock to 60 FPS at cboarded 1440p (1.8m pixels)
X1X manages to lock to 60 FPS at native 1080p (2.1m pixels)
XSS averages 40 or so FPS at cboarded 1440p with lightest possible RT enabled

In the end, ray tracing will also need EXTRA video memory. This is another theory of mine about how they will handle the gap between the SX and SS in terms of memory: if they simply cut RT out of the equation, the lack of memory will probably not be an issue. And they will get away with saying "don't expect to run RT on a 300 dollar console!1".

I don't necessarily say buy a One X. It's an outdated machine with a slow HDD and a crap CPU. I would choose the Series S as well. But claiming that this game runs better on the S is simply wrong. Then again, you can experience RT on the S, which is welcome. But for how long and how sustainably, that is another question.

Edit: The latest video from DF shows the situation is even worse for the S; it's dropping below 30 in some scenes with RT enabled. Well, checkerboarded 1080p, off we go then! Welcome, 960x1080 blocky rendering! (as if 1280x1440 wasn't blocky to start with)
 
Last edited:

SlimySnake

Flashless at the Golden Globes
X1X runs at native 1080p and mostly locks to 60; the only places where it drops frames are because of its very outdated, bad CPU. This is a point everyone agrees on: Series S has a superior CPU and will always manage to get to 60 FPS for the entirety of the generation (well, 648p, 720p and 810p are a different talk that the One X never deigned to have and never will).

X1X GPU still has the power to push 2.1m pixels at 60 FPS

XSS runs CHECKERBOARDED 1440p. That means the actual render resolution is 1280x1440, which comes to 1.8M pixels. In this config, it also mostly locks to 60 FPS.

Lower pixel count actually gives a bit headroom so that it can run ray tracing along with the same resolution.

Sorry, but I don't accept "checkerboard" as a native resolution. It barely looks passable at 4K, and it is not designed to be efficient at 1440p/1080p. I've tried and it looks very bad. Only at 4K, the blocky effects seem to be reduced, but not completely.

But of course, since the "cheap console" crutch is in play, anything from 720p/810p to checkerboarded 1440p (actually, 992p quality) will get a pass for the console, it seems.



X1X also uses the same checkerboarding in its 4K mode. And guess what? At 1920x2160 (a much higher pixel count than 1280x1440), it still holds 45+ FPS.

I'm not primarily saying that the One X is a better console. But it still has a beefy GPU that can match or outperform the Series S GPU.

When and if games start to use full capabilities of RDNA2, then Series S may "slightly" outperform the One X.

Great post. I have been saying this for a while and everyone mocked me, but the 4 tflops RDNA 2.0 GPU was always going to be WEAKER than the 6 tflops Polaris GPU.

The only IPC gains RDNA cards received over GCN were the 1.25x gains from Polaris to RDNA 1.0. RDNA 1.0 to 2.0 saw no IPC gains, only perf-per-watt gains.

So in reality, that 4.0 tflops XSS GPU is only a 5.0 tflops Polaris-equivalent GPU, and the X1X GPU is 20% more powerful. MS knew that, and that's why they gave all BC titles on the XSS the X1S profiles. So no RDR2 at native 4K on this console. And now we are finally seeing what this means in GPU-bound games.
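The Polaris-equivalent arithmetic above, made explicit. The 1.25x factor is AMD's own RDNA1-vs-GCN performance-per-flop claim; scaling teraflops this way is a rough heuristic for cross-architecture comparison, not a benchmark:

```python
# AMD claimed ~1.25x performance per flop for RDNA1 over GCN/Polaris,
# so scale RDNA teraflops up before comparing raw numbers.
RDNA_IPC_VS_POLARIS = 1.25

xss_polaris_equiv = 4.0 * RDNA_IPC_VS_POLARIS  # -> 5.0 "Polaris TF"
x1x_polaris_tf = 6.0
print(x1x_polaris_tf / xss_polaris_equiv)      # -> 1.2, i.e. 20% faster
```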
 

yamaci17

Member
Great post. I have been saying this for a while and everyone mocked me, but the 4 tflops RDNA 2.0 GPU was always going to be WEAKER than the 6 tflops Polaris GPU.

The only IPC gains RDNA cards received over GCN were the 1.25x gains from Polaris to RDNA 1.0. RDNA 1.0 to 2.0 saw no IPC gains, only perf-per-watt gains.

So in reality, that 4.0 tflops XSS GPU is only a 5.0 tflops Polaris-equivalent GPU, and the X1X GPU is 20% more powerful. MS knew that, and that's why they gave all BC titles on the XSS the X1S profiles. So no RDR2 at native 4K on this console. And now we are finally seeing what this means in GPU-bound games.
Well, RDNA2 features will surely bolster both consoles' performance, such as mesh shading, variable rate shading, and probably more in the background. I would say it will catch up with the X1X, but that's not an impressive feat anyway. :messenger_grinning_sweat:

6 TFLOPS of RDNA2 would have been a lot more future proof, along with that featureset. $350, a bit bigger, a bit bulkier, would surely have done the job!
 

kyliethicc

Member
Great post. I have been saying this for a while and everyone mocked me, but the 4 tflops RDNA 2.0 GPU was always going to be WEAKER than the 6 tflops Polaris GPU.

The only IPC gains RDNA cards received over GCN were the 1.25x gains from Polaris to RDNA 1.0. RDNA 1.0 to 2.0 saw no IPC gains, only perf-per-watt gains.

So in reality, that 4.0 tflops XSS GPU is only a 5.0 tflops Polaris-equivalent GPU, and the X1X GPU is 20% more powerful. MS knew that, and that's why they gave all BC titles on the XSS the X1S profiles. So no RDR2 at native 4K on this console. And now we are finally seeing what this means in GPU-bound games.
Well, RDNA2 features will surely bolster both consoles' performance, such as mesh shading, variable rate shading, and probably more in the background. I would say it will catch up with the X1X, but that's not an impressive feat anyway. :messenger_grinning_sweat:

6 TFLOPS of RDNA2 would have been a lot more future proof, along with that featureset. $350, a bit bigger, a bit bulkier, would surely have done the job!

It would have been interesting if the S had a 40 CU GPU instead of 20 CUs. Same as One X.

Clock it around 1.5 GHz and that's still a decent perf gap with its big bro, but not nearly as much.

If the S had 40 CUs and 12-16 GB RAM, it could have been a decent lower spec unit.
 
Last edited:

SlimySnake

Flashless at the Golden Globes
Well, RDNA2 features will surely bolster both consoles' performance, such as mesh shading, variable rate shading, and probably more in the background. I would say it would catch up with X1X, but that's not an impressive feat anyways. :messenger_grinning_sweat:

6 TFLOPS RDNA2 would be a lot more future proof, along with those featureset. 350 dollar, a bit bigger, a bit bulkier would surely do the job!
I think if they really wanted a cheaper alternative, 8 tflops would be something I could live with. I don't think it would've held back any games on the XSX. In cases where the XSX has to settle for 1440p 30 fps, like the UE5 demo, the XSS would've been a 1080p 30 fps title. This whole idea that games would be native 4K next gen is kind of bizarre, since we are already seeing cross-gen games having to settle for checkerboarding and ray-traced games like Watch Dogs dropping all the way down to 1440p 30 fps.

So yeah, $399. 8 tflops compete with Sony's DE.
 
Last edited:

kyliethicc

Member


Hopefully it’s a slim version.



If true, it's likely purely a 6nm redesign to increase production volumes by fabbing more APUs on a second node. That 6nm offers benefits to perf/watt and die size is a cost-benefit Sony will most assuredly pocket. It's possible they can primarily produce the PS5 DE on this 6nm node to shave off a significant chunk of the BOM, thus allowing them to ship considerably more consoles without taking any loss per unit.

I'm wondering if this 6nm is a TSMC node? I've never heard of a 6nm node before.

Edit: Aaaah, here we go. Same design rules as 7nm DUV, 18% higher transistor density but without any change to performance or power consumption. So would be a straight die shrink from the current N7P node:


This is cool news. It's probably just a way for them to increase chip production. But they might not even call it the slim model yet or even redesign the exterior. They might not make this a publicly announced thing either. Why fight for just 7nm wafers when there are 6nm wafers to buy too? Both can do the job.

I see a scenario where there just end up being PS5s with either 7nm or 6nm SoCs, all with the same specs and design. The only way a user would ever know if their PS5 has a 7 or 6 nm chip would be a teardown, lol. Maybe the power draws will differ a little. I think a full slim model with a noticeably smaller die, lower power draw, and/or more storage is still unlikely before 2023. This will likely just be called the CFI-1100 model, not the CFI-2000 model (slim).
 
Last edited:

DeepEnigma

Gold Member
I see a scenario where there just end up being PS5s with either 7nm or 6nm SoCs, all with the same specs and design. The only way a user would ever know if their PS5 has a 7 or 6 nm chip would be a teardown, lol. Maybe the power draws will differ a little.
They'll use a different model, like A or B at the end of the model number. They will need to for the FCC submission.

So the box should be able to tell you if the case remains the same. Maybe they will make the 6nm in all black too.
 

Riky

$MSFT
This STILL doesn't address the fact that the XSS outperforms the X1X despite having less RAM, so your original point about RAM is wrong. Also, when you use SFS and VA you will be able to deal with having a smaller RAM pool. You don't need tons of memory for a system targeting 1080p.

If you have both consoles it becomes very obvious that the Series S outperforms the X1X consistently. Gears 5 is a good example: it has higher settings on Series S and maintains a more solid framerate, and it also runs at 120fps in multiplayer, twice the framerate.
In Cyberpunk the Series S smokes the One X with a solid 30fps whilst the One X dips badly. Warzone runs at twice the framerate on Series S, Watch Dogs has ray tracing, and Valhalla runs smoother in the 30fps mode and has a performance mode at twice the framerate the One X has. There are more examples but you get the idea.
 

kyliethicc

Member
They'll use a different model, like A or B at the end of the model number. They will need to for the FCC submission.

So the box should be able to tell you if the case remains the same. Maybe they will make the 6nm in all black too.
Well ya PS5 model numbers now are CFI-1000A (disc) and CFI-1000B (digital).

This would just be CFI 1100 instead of 1000.

Like the launch PS4 Pro was CUH-7000, then there was the 7100 and 7200 minor redesigned models.

For example in 2014-15:

 
Last edited:

sncvsrtoip

Member
XSS runs CHECKERBOARDED 1440p. That means the actual render resolution is 1280x1440, which comes to 1.8M pixels. In this config, it also mostly locks to 60 FPS.

Lower pixel count actually gives a bit headroom so that it can run ray tracing along with the same resolution.
But you have to remember that checkerboarding (or whatever Capcom's resolution interpolation algorithm is) is not free and also has some performance penalty.
 