
Next-Gen PS5 & XSX |OT| Console tEch threaD


inflation

Member
The other side of the coin, though, is that NPM is viewed as 'best in breed' for package management, and IIRC there are more available packages for Node.js than for any other language currently. So it's a huge ecosystem.
Read some Hacker News or Reddit and you'd see how people who loved npm came to hate it after it bloated. That "rage quitting" fiasco still shows the instability of a repository controlled by a single company.

I would imagine most Node.js packages are web-related, because that's where it has its major advantages. Python, due to its longevity and generality, has more FFI binding packages, such as OpenGL, BLAS, and numpy.
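For illustration, Python's FFI reach goes beyond packaged bindings; even the standard library's ctypes can call straight into a C library. A minimal sketch (library lookup is platform-dependent; find_library covers the common cases):

```python
# Minimal sketch of Python's FFI story using only the stdlib (ctypes):
# call the C math library's cos() directly, no binding package required.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))  # platform-dependent lookup
libm.cos.restype = ctypes.c_double    # declare the C signature
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0, computed by the C library
```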
 

Razvedka

Banned
Read some Hacker News or Reddit and you'd see how people who loved npm came to hate it after it bloated. That "rage quitting" fiasco still shows the instability of a repository controlled by a single company.

I would imagine most Node.js packages are web-related, because that's where it has its major advantages. Python, due to its longevity and generality, has more FFI binding packages, such as OpenGL, BLAS, and numpy.

I'm pretty connected to the community. I agree with the bloat issue; NPM/Node are victims of their own success. Where possible I avoid pulling down libs with deep dependency trees, and otherwise try to keep it light.
 

Razvedka

Banned
He saw me through a tough time in my life, and was my best man as well. Nobody is perfect :D



Python is majorly loved in physics and math environments. Today's trading is very much based on physics being applied to the stock market (I know, it sucks, but that's what it is nowadays), so that's why traders' roles include Python. For the very same reason it has been injected into adaptive security principles. Problem is that adaptive security isn't about grabbing a log and doing something with it (you could use Perl for that, and it's a much stronger string manipulator). Adaptive security is a wider arena that encompasses proper asset management (CMDB), configuration baselines, impact analysis, information pathway validation and intelligence feeds. Put it all together and THEN apply it to your first line's security operations: KABOOM! You have adaptive security.

Problem is that it's fucking expensive (resource-wise) and you gotta have management that really buys into it. Once they get word that you're asking for a 20-to-30-person team, just to start with, for a project that won't see the light of day before two years have passed, everyone disappears.

True. My math/physicist guys really like Python, and other languages like R or Haskell.
 
The two virtual "pools" of memory exist on 10x physical GDDR6 memory chips, which are a mix of 1GB and 2GB densities.

The "fast pool", comprises the 4x 1GB GDDR6 chips together with the first 1GB of the 6x 2GB density chips.

The "slow pool" comprises the second 1GB on the 6x 2GB density chips.

Think of the 2GB chips as being partitioned in software, with the first GB accessible in the "fast pool" and the second GB accessible in the "slow pool".

The number of chips is what defines the memory bus width and thus bandwidth, with each chip having 32 pins at 1 bit per pin.

So the fast pool has an interface width of 10 chips x 32 pins per chip x 1 bit per pin = 320 bits.
Likewise, the slow pool width is 6 chips x 32 pins per chip x 1 bit per pin = 192 bits.

With 14Gbit/s (per pin) GDDR6 chips, that gives bandwidth figures of 560GB/s and 336GB/s respectively for the two pools.

As you can tell, both pools are accessed physically through the same pins on each 2GB chip, i.e. a common memory bus, so both pools cannot be accessed at the same time.

So when accessing the second 6GB on the 2GB-density chips, i.e. the slow pool, you're doing so through a 192-bit interface, so your bandwidth is limited to 336GB/s.
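To make the arithmetic concrete, here's the same calculation as a quick Python sketch (chip counts, pin widths and the 14Gbit/s per-pin rate are the figures stated above):

```python
# Series X memory-pool bandwidth, from the figures above:
# 32 data pins per GDDR6 chip, 14 Gbit/s per pin.
PINS_PER_CHIP = 32
PIN_RATE_GBITS = 14

def pool_bandwidth_gbs(chips: int) -> float:
    """Bus width (bits) times per-pin rate, divided by 8 bits per byte."""
    bus_width_bits = chips * PINS_PER_CHIP
    return bus_width_bits * PIN_RATE_GBITS / 8

print(pool_bandwidth_gbs(10))  # fast pool: 320-bit bus -> 560.0 GB/s
print(pool_bandwidth_gbs(6))   # slow pool: 192-bit bus -> 336.0 GB/s
```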

I'm pretty sure the GPU can access both memory pools, as otherwise that would be a design oversight of the highest order, effectively making the system a non-unified memory architecture with only 10GB of VRAM (putting it at a significant disadvantage to the competition). I'm pretty confident this is not the case.

Both the CPU and GPU can access both pools, but you really want your GPU reading primarily from the fast pool. Otherwise your performance will tank hard.

OK, I was reading the Digital Foundry breakdown with Andrew Goossen and trying to understand what they said:

""Memory performance is asymmetrical - it's not something we could have done with the PC," explains Andrew Goossen "10 gigabytes of physical memory [runs at] 560GB/s. We call this GPU optimal memory. Six gigabytes [runs at] 336GB/s. We call this standard memory. GPU optimal and standard offer identical performance for CPU audio and file IO. The only hardware component that sees a difference in the GPU."

So what happens if the GPU needs more than 10GB?
 

ethomaz

Banned
OK, I was reading the Digital Foundry breakdown with Andrew Goossen and trying to understand what they said:

""Memory performance is asymmetrical - it's not something we could have done with the PC," explains Andrew Goossen "10 gigabytes of physical memory [runs at] 560GB/s. We call this GPU optimal memory. Six gigabytes [runs at] 336GB/s. We call this standard memory. GPU optimal and standard offer identical performance for CPU audio and file IO. The only hardware component that sees a difference in the GPU."

So what happens if the GPU needs more than 10GB?
Devs will be fucked and will need to adapt their code.
 

LiquidRex

Member
Correct me if I'm wrong, but isn't how much RAM is available to developers on PS5/XSX/XSS just pure speculation? Aren't the NDAs over? And if so, why hasn't a single developer come out and said exactly how much RAM they have available to them? 🤔
 

Captain Hero

The Spoiler Soldier
[image: PS5 setup photo]


Just updated my PS5 gaming setup... what do you think, guys? Anything I can do, or is it enough?

Only the cables, I need to hide them, and I'm working on that.
 

ethomaz

Banned
Correct me if I'm wrong, but isn't how much RAM is available to developers on PS5/XSX/XSS just pure speculation? Aren't the NDAs over? And if so, why hasn't a single developer come out and said exactly how much RAM they have available to them? 🤔
The amount of RAM they have available comes from the MS DF interview... it's official.
 
What I'm curious about is: when the GPU needs more than 10GB, what advantage will the PS5 have in multiplats?

When the GPU needs to access data stored in the slow pool, it will be limited to 336GB/s. For a 12TFLOPs GPU, this ain't nearly enough and would lead to GPU stalls and overall reduced performance.

PS5 has one single pool of memory with higher bandwidth than the slow pool on the XSX. The PS5 GPU has access to the full bandwidth when accessing data stored in RAM. Its GPU cache performance is also higher due to higher clocks and the cache scrubbers, meaning it can cope better with proportionally less memory bandwidth than the XSX GPU.
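As a purely hypothetical back-of-envelope (my own time-slicing model, not anything from Microsoft or DF): if the shared bus alternates between pools, the effective bandwidth the GPU sees is a weighted harmonic mean of the two pool speeds, so even modest slow-pool traffic drags the average down:

```python
# Hypothetical model: the shared bus serves one pool at a time, so the
# time to move N bytes splits proportionally between the pool speeds.
FAST_GBS = 560.0  # 320-bit pool
SLOW_GBS = 336.0  # 192-bit pool

def effective_bandwidth_gbs(slow_frac: float) -> float:
    """Weighted harmonic mean: slow_frac of bytes come from the slow pool."""
    return 1.0 / (slow_frac / SLOW_GBS + (1.0 - slow_frac) / FAST_GBS)

for f in (0.0, 0.1, 0.25, 0.5):
    print(f"{f:.0%} slow-pool traffic -> {effective_bandwidth_gbs(f):.0f} GB/s")
# 0% -> 560, 10% -> 525, 25% -> 480, 50% -> 420
```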

OK, I was reading the Digital Foundry breakdown with Andrew Goossen and trying to understand what they said:

""Memory performance is asymmetrical - it's not something we could have done with the PC," explains Andrew Goossen "10 gigabytes of physical memory [runs at] 560GB/s. We call this GPU optimal memory. Six gigabytes [runs at] 336GB/s. We call this standard memory. GPU optimal and standard offer identical performance for CPU audio and file IO. The only hardware component that sees a difference in the GPU."

So what happens if the GPU needs more than 10GB?

Performance tanks.
 

Mahavastu

Member
[image: PS5 setup photo]


Just updated my PS5 gaming setup... what do you think, guys? Anything I can do, or is it enough?

Only the cables, I need to hide them, and I'm working on that.
I see no loudspeakers... A surround system, maybe even something like Atmos, is nice for gaming. And the TV looks rather small.

Never ask such questions; we will always find something for you to waste thousands of dollars on :messenger_winking:
 

SlimySnake

Flashless at the Golden Globes


The RT performance of these RDNA 2.0 cards is awful. Simply unacceptable. The $650 6800 XT is getting beaten by a $500 3070, and the $700 3080 is offering up to 64% more performance. What's worse is that this is before the DLSS 2.0 performance upgrades. We are looking at the 3080 being roughly 100% better in RT.

This isn't good news for consoles, and I think Watch Dogs is a good indicator of what to expect from RT on consoles: 1440p 30fps with 1080p checkerboard reflections. I have no idea what Insomniac is doing with Spider-Man to get it running at native 4K 30fps. It gives me hope that there is some kind of secret sauce in the PS5's RT implementation, but then again they haven't done a 1440p 60fps RT mode, which is troubling to say the least. The reflections are also 1080p checkerboard, just like the Watch Dogs reflections, so I don't know if there is a secret sauce in the PS5.

AMD, MS and Sony should've just sucked it up and launched with dedicated RT cores. These are $500 consoles that are selling out at a premium price. It's clear no one really cares about $399 weak machines anymore. The PS5 BOM should've been $500, not $450.

Performance tanks.
Could this explain the drastic drops at times in high-framerate modes? It looks like it's holding fine on the PS5 and then boom, massive 20fps drop before getting back up.
 

LivingD3AD

Member


The RT performance of these RDNA 2.0 cards is awful. Simply unacceptable. The $650 6800 XT is getting beaten by a $500 3070, and the $700 3080 is offering up to 64% more performance. What's worse is that this is before the DLSS 2.0 performance upgrades. We are looking at the 3080 being roughly 100% better in RT.

This isn't good news for consoles, and I think Watch Dogs is a good indicator of what to expect from RT on consoles: 1440p 30fps with 1080p checkerboard reflections. I have no idea what Insomniac is doing with Spider-Man to get it running at native 4K 30fps. It gives me hope that there is some kind of secret sauce in the PS5's RT implementation, but then again they haven't done a 1440p 60fps RT mode, which is troubling to say the least. The reflections are also 1080p checkerboard, just like the Watch Dogs reflections, so I don't know if there is a secret sauce in the PS5.

AMD, MS and Sony should've just sucked it up and launched with dedicated RT cores. These are $500 consoles that are selling out at a premium price. It's clear no one really cares about $399 weak machines anymore. The PS5 BOM should've been $500, not $450.


Could this explain the drastic drops at times in high-framerate modes? It looks like it's holding fine on the PS5 and then boom, massive 20fps drop before getting back up.

I don't understand why Sony didn't go with Nvidia like in the 7th generation! It would have been fantastic to see a PS5 based on Nvidia's 2020 technologies! Probably a $599 price tag at least, but totally worth it.
 
I don't understand why Sony didn't go with Nvidia like in the 7th generation! It would have been fantastic to see a PS5 based on Nvidia's 2020 technologies! Probably a $599 price tag at least, but totally worth it.
Go with Nvidia for what? Nvidia doesn't have an APU like AMD has.
 

priba76br

Neo Member
Split memory is suboptimal; most games will be using >10GB, especially at 4K, which will cause huge bottlenecks going forward.

The GPU isn't looking any better either, but the situation isn't as dire as the RAM.
10 gigs for video is enough, otherwise everybody that bought a 3080 with 10 gigs of VRAM will have a bad time at 4K... lol. Do you really think the PS5 will allocate more to video? The PS4 already reserved 3-3.5GB for the OS; will the nicer, more fully featured PS5 OS use less? How about higher-quality sound with the Tempest Engine, and more advanced game logic? I believe neither console will use more than 10GB for video.
 

Mahavastu

Member
I don't understand why Sony didn't go with Nvidia like in the 7th generation! It would have been fantastic to see a PS5 based on Nvidia's 2020 technologies! Probably a $599 price tag at least, but totally worth it.

Price.
The die size of the PS5 SoC is about 300mm2, and the XSX SoC is about 360mm2 (only 47% of that is GPU). Compare this to the 650mm2 of the 3080, and that's without a CPU. That's nearly twice as large, more if you add the CPU and IO blocks.
We're talking about quite a price advantage for AMD's solution here, for pretty good performance. I'm sure both the PS5 and XSX will be able to keep up over a 6-7 year generation.
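To see why area translates into price so aggressively, here's a rough sketch using the textbook dies-per-wafer estimate and a simple Poisson yield model (the defect density is an assumed illustrative number, not vendor data):

```python
# Illustrative only: bigger dies mean fewer candidates per wafer AND
# lower yield, so cost per good die grows faster than area.
import math

WAFER_DIAMETER_MM = 300.0
DEFECTS_PER_CM2 = 0.1  # assumed defect density, purely for illustration

def gross_dies(area_mm2: float) -> float:
    """Textbook dies-per-wafer estimate with an edge-loss correction term."""
    d = WAFER_DIAMETER_MM
    return math.pi * (d / 2) ** 2 / area_mm2 - math.pi * d / math.sqrt(2 * area_mm2)

def good_dies(area_mm2: float) -> float:
    """Scale gross dies by Poisson yield: exp(-area_cm2 * defect_density)."""
    return gross_dies(area_mm2) * math.exp(-(area_mm2 / 100) * DEFECTS_PER_CM2)

for name, area in (("PS5 SoC", 300), ("XSX SoC", 360), ("3080-class die", 650)):
    print(f"{name} (~{area}mm2): ~{good_dies(area):.0f} good dies per wafer")
# ~146 vs ~112 vs ~43: the big GPU die costs well over 3x per good die
```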

Two other things:
- Nvidia has no "fast enough" CPU solution. I think they just bought ARM, but it will still take years until they are competitive with AMD overall.
- Microsoft used Nvidia for the OG Xbox, and Sony for the PS3. I read that both were unhappy with Nvidia and happier with ATI/AMD, especially regarding the pricing and the flexibility to include their own changes.
 

On Demand

Banned
As predicted, he dropped some interesting nuggets:
  • Sony being coy with the PS5 internals and deep technical features was mostly because of the bad reception of the Road to PS5 talk; there was an internal mandate from Jim Ryan himself, and developers followed suit, hence the very slow drip of information
  • CPU is Zen 2 with a unified 8MB L3 cache (confirmed by two developers); there is no option to turn off SMT on the CPU. The unified cache allows significantly lower latency when accessing the cache memory.
  • One CPU core is dedicated to the operating system and similar functionality, which leaves the remaining 7 cores for developers
  • The DDR4 chip is for SSD caching and OS tasks; developers will completely ignore it.
  • PS5 does not feature Sampler Feedback Streaming, but there are tools which offer developers similar results/functionality (though it's a little more complicated)
  • PS5's Primitive Shaders are much more advanced than the ones used in RDNA 1 and give the GPU "extreme" levels of precision
  • VRS also runs at "extreme" precision with the Geometry Engine
  • Unreal Engine 5's "Nanite" technology was making good use of the PS5's Geometry Engine and has been the best showcase of the GE so far
  • The cache scrubbers have been overlooked and are not getting enough attention; according to the developer, they offer a significant speedup of the GPU and reduce a lot of overhead
  • The Tempest Engine is already being used by developers for CPU-related tasks and is great at things like physics
  • PS5's compute unit architecture is pretty much the same as that in the desktop implementation of RDNA 2 and the Series X
There's a bunch of other stuff too which I left out, which he was re-confirming in this video as well, like the PS5 being designed around "raw geometry throughput and data latency". I recommend watching the whole video.

Do we know how much RAM is reserved for the PS5 OS? That still hasn’t been officially confirmed.
 

Neo Blaster

Member
I Don’t understand why Sony didn’t go with Nvidia like 7th generation! It would have been fantastic to see PS5 based on Nvidia’s 2020 technologies! Probably $599 price tag at least but totally worth it.
The same reason no other platform holder goes with them: they just charge too much. The Switch is an exception because all those Tegras tanked and Nintendo got them really cheap.
 

Mahavastu

Member
Europe is the Playstation Stronghold 💪🏻
In Europe the PS5 launched a week after the XSX.
This means you see the bulk of PS5s sold in the launch week, while the XSX launch units fall in a different week.

Anyway, with such small numbers (720k units for all of Europe, a bit more than a million in the US) it is no surprise that demand for the PS5 so far outstrips supply.
We had that Bloomberg or Forbes article claiming 10 million units manufactured by the end of December. Seems it was fake, as some of us thought back then...
 

kyliethicc

Member
The Series X doesn't use split RAM in the way, say, the PS3 did; devs see it as one pool, and this has been confirmed several times. There is no evidence from anyone that any game so far uses more than 10GB of VRAM; link one if you can find it. Even the slower portion of RAM is faster than the Xbox One X RAM, which ran games at 4K.

That 10GB also has a massive bandwidth advantage over the PS5 on top.
Laughably bad take.
 

kyliethicc

Member
OK, I was reading the Digital Foundry breakdown with Andrew Goossen and trying to understand what they said:

""Memory performance is asymmetrical - it's not something we could have done with the PC," explains Andrew Goossen "10 gigabytes of physical memory [runs at] 560GB/s. We call this GPU optimal memory. Six gigabytes [runs at] 336GB/s. We call this standard memory. GPU optimal and standard offer identical performance for CPU audio and file IO. The only hardware component that sees a difference in the GPU."

So what happens if the GPU needs more than 10GB?
If the GPU needs 10.5 or 11GB at 560GB/s, even just for rendering a few frames, it really can't get that. The system is bottlenecked.
 

sircaw

Banned
There are just no Xbox consoles available, sold out here in the UK; they didn't make anywhere near enough consoles.

That's what I don't understand. I thought they went into console production many months ago; I remember the Phil Spencer quote back then. What have they been doing? Why has Sony got so many more consoles available?
 

ethomaz

Banned
That's what I don't understand. I thought they went into console production many months ago; I remember the Phil Spencer quote back then. What have they been doing? Why has Sony got so many more consoles available?
Remember when they said they sold every Xbox One at launch in the UK... only to find out months, even a year later, that there were massive quantities of the Day One Edition still at retailers.
 