
Ali Salehi, a rendering engineer at Crytek, contrasts the next-gen consoles in an interview (Update: Tweets/Article removed)

GymWolf

Gold Member
Even Sony admitted early in the gen that the PS3's Cell was hard to code for... specifically to use its SPUs.

so, sony and who else? and how about MS?
did devs ever really talk shit about the hardware?
 

ethomaz

Banned
Dr. Keo is an Era poster, not a dev, and the issue was put to bed by the expert Lady Gaia

No, they are not individual straws; the bus works in parallel. I'll post it again:


[image: EwW58mE.png]
I did not know that.

Anyway, you are right: memory access is done in parallel, with the controller accessing all the 16-bit channels simultaneously.
 

rnlval

Member
Dr. Keo is an Era poster, not a dev, and the issue was put to bed by the expert Lady Gaia

No, they are not individual straws; the bus works in parallel. I'll post it again:


[image: EwW58mE.png]
"CPU is doing literally nothing?"

Follow the CPU cache-boundary programming guide and use the fusion links to minimize external memory hit rates for the CPU.

Respecting the CPU cache-boundary programming guidelines on AMD-based game consoles also optimizes for gaming PCs.
 
Last edited:

ethomaz

Banned
so, sony and who else? and how about MS?
did devs ever really talk shit about the hardware?
Well, if Sony (first-party) said that, of course 3rd-party devs were even harsher.

Epic before PS3 launch.
You will find a lot of devs talking.
 
Last edited:

GymWolf

Gold Member
Well, if Sony (first-party) said that, of course 3rd-party devs were even harsher.

Epic before PS3 launch.
You will find a lot of devs talking.
do you think they are gonna have the courage to talk shit during next gen?
 
Last edited:

rnlval

Member
Dr. Keo is an Era poster, not a dev, and the issue was put to bed by the expert Lady Gaia

No, they are not individual straws; the bus works in parallel. I'll post it again:


[image: EwW58mE.png]
No, each data element is associated with a memory address, hence a data element stream can be located across different memory addresses.

Each GDDR6 16-bit channel has its own address links. I don't disagree that GDDR6 memory access is in parallel.
 
Last edited:

Radical_3d

Member
But does it really matter? It just works, and that's what should matter for us end consumers; it's in the devs' hands to make the best use of it. MS already showed Gears 5 running maxed out on XBX at 4K60, and fully path-traced Minecraft, so obviously there is no issue/bottleneck as far as this fancy RAM configuration goes.
It doesn't matter. I'm just curious how, according to Tom from "Moore's Law Is Dead", DICE is getting the same frame rates on two machines so different in theoretical power.

OT: How can I stop watching threads every time I reply to them?
 
Last edited:

ZywyPL

Banned
It doesn't matter. I'm just curious how, according to Tom from "Moore's Law Is Dead", DICE is getting the same frame rates on two machines so different in theoretical power.

Probably because the games are locked to 60FPS, and both consoles are way more capable than that. Frostbite is already very effective on current-gen consoles, let alone on 10-12TF systems, and more efficient ones on top of that. The real difference will be in VRR modes with uncapped framerates. Or the XBX will be able to use those extra ~2TF of CUs not for graphics but for more/better physics simulation, though that's less likely to happen.
 

geordiemp

Member
No, each data element is associated with a memory address, hence a data element stream can be located across different memory addresses.

Each GDDR6 16-bit channel has its own address links. I don't disagree that GDDR6 memory access is in parallel.

It's a 320-bit wide bus, and each memory chip runs at 14 Gbps. For PS5 it's a 256-bit wide bus, each chip at 14 Gbps.

How do you think one can get to 560 GB/s and the other only 448 GB/s accessing the same RAM chips?

It's because the 560 goes wider; the chips work in parallel.

Parallel.

Otherwise, if it were individual RAM chips and individual straws, all 14 Gbps chips would give the same bandwidth access.
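The arithmetic behind those two figures is simple; a quick sketch in Python (the function name is mine, 14 Gbps per pin as stated above):

```python
# Peak GDDR6 bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte.
def gddr6_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits * gbps_per_pin / 8

xsx = gddr6_bandwidth_gbs(320, 14)  # 560.0 GB/s on a 320-bit bus
ps5 = gddr6_bandwidth_gbs(256, 14)  # 448.0 GB/s on a 256-bit bus
```

Same chips, same per-pin rate; the whole difference is how many chips the bus spans in parallel.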
 
Last edited:

Radical_3d

Member
Probably because the games are locked to 60FPS, and both consoles are way more capable than that. Frostbite is already very effective on current-gen consoles, let alone on 10-12TF systems, and more efficient ones on top of that. The real difference will be in VRR modes with uncapped framerates. Or the XBX will be able to use those extra ~2TF of CUs not for graphics but for more/better physics simulation, though that's less likely to happen.
It will be interesting if this is the first batch of games in a generation that doesn't blow the systems away performance-wise. Third parties have a long tradition of putting more detail into their games than their as-yet-unoptimized code can handle.
 

ethomaz

Banned
But does it really matter? It just works, and that's what should matter for us end consumers; it's in the devs' hands to make the best use of it. MS already showed Gears 5 running maxed out on XBX at 4K60, and fully path-traced Minecraft, so obviously there is no issue/bottleneck as far as this fancy RAM configuration goes.
Seems like the Gears 5 demo needed dynamic resolution to maintain 60fps.
But that is expected for a two-week port.
It needs optimization.
 
Last edited:

geordiemp

Member
I think with production being severely affected by the pandemic, it's doubtful we'll see any chip changes from now until launch. We'll see though.

Good post; it depends on costs, deals, and binning. If Sony are buying from Samsung, and their 18/16 Gbps chips mostly test out at 14 Gbps, then Sony will buy those, I guess?

If Samsung mostly gets 16 Gbps chips, then it's in Sony's interest to bargain. The same goes for MS; it's not a one-way street.

That's my guess; I have no idea about their relationship or Samsung's success with GDDR6.
 
Last edited:

rnlval

Member
It doesn't matter. I'm just curious how, according to Tom from "Moore's Law Is Dead", DICE is getting the same frame rates on two machines so different in theoretical power.

OT: How can I stop watching threads every time I reply to them?

[image]



For 4K with Battlefield V, if there's a 60 Hz limit, both consoles hit 60 Hz regardless of XSX's extra power.

The RX 5700 XT averages 9.66 TFLOPS, hence the PS5 BFV version would be slightly faster.


I don't use the Battlefield V and Forza Motorsport 7 examples since they don't reflect Unreal Engine 4's results.
 
Last edited:

psorcerer

Banned
The important thing for the discussion I was having is that 1 / 2 / 3 (whatever) channel(s) accessing the higher area doesn't necessarily mean, and shouldn't mean, that every other channel is blocked for the 10GB area.

But it does mean that 6GB of that 10GB is blocked, because you're accessing the other half of those chips at that time.
Only the four smaller 1GB chips are idle.
 

Radical_3d

Member
[image]



For 4K with Battlefield V, if there's a 60 Hz limit, both consoles hit 60 Hz regardless of XSX's extra power.

The RX 5700 XT averages 9.66 TFLOPS, hence the PS5 BFV version would be slightly faster.


I don't use the Battlefield V and Forza Motorsport 7 examples since they don't reflect Unreal Engine 4's results.
And the SX would be a 2080? I can live with 8 fps less on average.
 

Radical_3d

Member
It will be fun to watch if the first batch of games shows the opposite :messenger_face_screaming:
I've read no dev saying that. Even the guy from this thread (who likes Microsoft as little as I do) isn't saying that. All they are saying is that the PS5 is easier to develop for and that can make up for the lack of power. The best-case scenario is to match the SX, not to surpass it, and I don't think it'll happen. The SX will be more stable and cost $100 more; if anyone wants to go there for a 160p difference, that's respectable.
 

B_Boss

Member
Didn't DICE say they're basically "going all in" as a 3rd party, taking full advantage of the consoles? Perhaps that was just PR and not realistic, or does the hardware motivate the alleged change in development?
 

rnlval

Member
It's a 320-bit wide bus, and each memory chip runs at 14 Gbps. For PS5 it's a 256-bit wide bus, each chip at 14 Gbps.

1. How do you think one can get to 560 GB/s and the other only 448 GB/s accessing the same RAM chips?

It's because the 560 goes wider; the chips work in parallel.

Parallel.

2. Otherwise, if it were individual RAM chips and individual straws, all 14 Gbps chips would give the same bandwidth access.
My statement doesn't remove memory access parallelism, but:

a GDDR6 chip has two memory address channels and two 16-bit data channels.

GDDR6 can support full-duplex read/write modes with dual 16-bit channels, like HBM.

My statement does NOT contradict the 336 GB/s limitation when multiple workloads focus on the 3.5GB memory address space.

The data elements for frame buffers and textures are usually located next to each other. For X1X, the game's frame buffers should not be located in the 3.5GB address space.

You have to factor in the GPU's memory page size.
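For reference, the split-pool numbers being debated can be sketched under the commonly reported XSX memory layout (ten 32-bit chips at 14 Gbps, six of them 2GB and four 1GB; the chip counts and helper below are assumptions for illustration):

```python
# XSX split-pool sketch: the first 10 GB interleaves across all ten chips
# (full 320-bit bus); the upper 6 GB exists only on the six 2GB chips, so
# accesses to it can only span a 192-bit slice of the bus.
GBPS_PER_PIN = 14

def pool_bandwidth_gbs(chips_in_pool: int, bits_per_chip: int = 32) -> float:
    return chips_in_pool * bits_per_chip * GBPS_PER_PIN / 8

fast_pool = pool_bandwidth_gbs(10)  # 560.0 GB/s for the 10GB "GPU-optimal" pool
slow_pool = pool_bandwidth_gbs(6)   # 336.0 GB/s for the upper 6GB pool
```

This is the same parallelism argument as before: bandwidth scales with how many chips a given address range spans.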
 
Last edited:

rnlval

Member
And the SX would be a 2080? I can live with 8 fps less on average.
Battlefield V's 4K Ultra settings are pretty easy for the RX 5700 XT and above.

XSX's extra power is for threshold issues, e.g. extra headroom for sustained 30/60/120 Hz at 4K targets.
 

Radical_3d

Member
Well, maybe they are even, and evenly priced. After all, you can make four PS5 SoCs for every three SX SoCs, but there is a crapload of chips that won't be able to run at 2.3GHz, hence more of them will be discarded vs. making a chip which only has to reach 1.8GHz. And the cooling solution will be more expensive as well (and the SX already looks expensive as hell).
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
My 2013 R9-290X and Intel Core i7-2600 obliterated "Cerny's supercharged PC," aka the PS4.

My old R9-290X continues to obliterate Cerny's PS4 Pro.

Big Navi comes around October 2020, which is another October-2013 R9-290X re-run. Red October = AMD's flagship arrives.

And how much did that PC cost you?
 

sircaw

Banned
Any summary? I don't watch videos here.

They say:

"If you are being really, really charitable the Xbox could have a 20% advantage (the other guy, I think, says 10 at most), but these consoles are so different in design. Developers are saying they're extremely close in performance. Coretek, behind the scenes, is worried that PC games could be left behind a little bit, due to the latency reduction in the PS5, because certain tasks require half the resources. PC stuff costs so much more than what you're getting in a console."

The narrator says he has also heard similar things from people he knows behind the scenes. The SSD could end up being really ground-changing.

APIs are going to be the way they scale performance soon; the way of the future.
Loading is completely gone on one of the consoles, half a second on everything.
But he says in the end it still needs to be proven.

He is saying people should stop talking about resolution for the next generation; 4K is done, CPUs are so powerful.

I am not as technical as you guys, so I probably missed a lot, sorry.
 
Last edited:

ethomaz

Banned
They say:

"If you are being really, really charitable the Xbox could have a 20% advantage, but these consoles are so different in design. Developers are saying they're extremely close in performance. Coretek, behind the scenes, is worried that PC games could be left behind a little bit, due to the latency reduction in the PS5, because certain tasks require half the resources. PC stuff costs so much more than what you're getting in a console."

The narrator says he has also heard similar things from people he knows behind the scenes. The SSD could end up being really ground-changing.

APIs are going to be the way they scale performance soon; the way of the future.
Loading is completely gone on one of the consoles, half a second on everything.
But he says in the end it still needs to be proven.

He is saying people should stop talking about resolution for the next generation; 4K is done, CPUs are so powerful.

I am not as technical as you guys, so I probably missed a lot, sorry.
Thanks... yeap, it matches what other devs said.

The SSD in the PS5, if not a game-changer for consumers, is a game-changer in how devs design/develop games... it is a paradigm change.

I'm very excited for the below-1s loading speeds.
That is how I used to play on SNES and N64... after PS1 I started to fall asleep during every loading time... the popularity of cellphones helped with that, because now I can play Candy Crush (actually Lords Mobile) during the loading times lol
 
Last edited:

sircaw

Banned
Thanks... yeap, it matches what other devs said.

The SSD in the PS5, if not a game-changer for consumers, is a game-changer in how devs design/develop games... it is a paradigm change.

I'm very excited for the below-1s loading speeds.
That is how I used to play on SNES and N64... after PS1 I started to fall asleep during every loading time.

For sure, that SSD with reduced loading times is my primary reason to get the new PS5.
Well, that and the new Tempest chip and of course the games; very, very excited.
Good times are coming.
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
The difference is bigger than 17-20%. I wouldn't go off the teraflop count alone, especially since the Series X GPU has more confirmed advanced GPU features that will stretch that performance quite a bit. The PS5 GPU clock speeds are also, again, variable for a reason. Even a 5% drop in frequency makes it a 9-teraflop GPU in the same range as the 5700 XT. Some here might think it will never drop by at least 5%, and it will probably drop further, but let's assume it doesn't and only drops 5%: that's a 9.x-teraflop PS5 GPU.

Next, there are those important features like Texture Space Shading, Sampler Feedback, VRS, and Mesh Shaders. None of those are confirmed for the PS5... yet. I assume machine learning is also there due to an old Wired piece, but Cerny didn't seem to detail any of that in his deep dive either.

As far as the bandwidth argument, I again think that's a pretty pointless exercise, because of course the more powerful GPU needs more memory bandwidth to feed it. That's one of the reasons more powerful GPUs outperform less powerful ones. This is why the 2080 Ti couldn't possibly have the same bandwidth as the RTX 2080, and why the slightly faster RTX 2080 Super also needed its memory bandwidth bumped up. The Xbox Series X has the perfect amount of memory bandwidth for its GPU. I think the gap is bigger than with the PS4 Pro and Xbox One X due to those Series X GPU features. And DirectX 12 engines should become far more common, which means Series X will really be stretching its legs. The original Xbox One, though it supported some, never got to benefit from the full range of DirectX 12 features because it simply didn't support them all.

I mean, I still don't think people comprehend how big a deal some of these confirmed Xbox Series X features are.




Even mesh shaders: one of their big benefits is significantly cutting down on the memory bandwidth cost of using far more complex geometry. So everything we've seen confirmed for the Series X thus far seems tailor-made for allowing the available resources to be used for a far better end result than what would have been possible without these features. Developers will have to build them into their engines, but with DirectX 12 Ultimate bringing PC and console much closer together, I can now see that happening.

You TOTALLY don't understand why the PS5's APU has a variable frequency. You seem to think the GPU will decrease its frequency by 5% or more for a whole game. What's your problem?
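For what it's worth, the clock-vs-teraflop claim in the quoted post is easy to check with the standard RDNA formula (36/52 CUs and the public clocks; the helper function is mine):

```python
# FP32 peak = CUs x 64 lanes x 2 FLOPs/cycle x clock (GHz) / 1000 -> TFLOPS.
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

ps5_peak = tflops(36, 2.23)         # ~10.28 TF at the 2.23 GHz cap
ps5_dip5 = tflops(36, 2.23 * 0.95)  # ~9.76 TF if the variable clock dipped 5%
xsx_peak = tflops(52, 1.825)        # ~12.15 TF at a fixed clock
```

So a sustained 5% clock dip would put the PS5 at roughly 9.76 TF, the "9.x teraflop" figure in the quote; whether and how often the clock actually dips is the point being argued, not the arithmetic.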
 

SonGoku

Member
I agree, although parts of each of the XSX's and PS5's respective GPUs may be co-designed customisations which show up in Big Navi on desktop but are not in the base RDNA2 spec.
That's reasonable; both might have unique customizations/modifications tailored to their designs to minimize bottlenecks. But as far as the base RDNA2 features mentioned by sejutsu (Texture Space Shading, Sampler Feedback, VRS, Mesh Shaders), it's a given PS5 will at least support the basic feature set found in every RDNA2 card come launch.

I think the confusion stems from rumors that claimed PS5 was an RDNA1-based hybrid, but that was already debunked with the confirmation of RDNA2.
 

ethomaz

Banned
That's reasonable; both might have unique customizations/modifications tailored to their designs to minimize bottlenecks. But as far as the base RDNA2 features mentioned by sejutsu (Texture Space Shading, Sampler Feedback, VRS, Mesh Shaders), it's a given PS5 will at least support the basic feature set found in every RDNA2 card come launch.

I think the confusion stems from rumors that claimed PS5 was an RDNA1-based hybrid, but that was already debunked with the confirmation of RDNA2.
VRS, Sampler Feedback, and Mesh Shaders are all done in the Geometry Engine.
That is a core part of RDNA 2 and can't be removed.

Texture Space Shading is the Nvidia name from before MS called it Sampler Feedback in the DX API, btw.
 

BluRayHiDef

Banned
The difference is bigger than 17-20%. I wouldn't go off the teraflop count alone, especially since the Series X GPU has more confirmed advanced GPU features that will stretch that performance quite a bit. The PS5 GPU clock speeds are also, again, variable for a reason. Even a 5% drop in frequency makes it a 9-teraflop GPU in the same range as the 5700 XT. Some here might think it will never drop by at least 5%, and it will probably drop further, but let's assume it doesn't and only drops 5%: that's a 9.x-teraflop PS5 GPU.

Next, there are those important features like Texture Space Shading, Sampler Feedback, VRS, and Mesh Shaders. None of those are confirmed for the PS5... yet. I assume machine learning is also there due to an old Wired piece, but Cerny didn't seem to detail any of that in his deep dive either.

As far as the bandwidth argument, I again think that's a pretty pointless exercise, because of course the more powerful GPU needs more memory bandwidth to feed it. That's one of the reasons more powerful GPUs outperform less powerful ones. This is why the 2080 Ti couldn't possibly have the same bandwidth as the RTX 2080, and why the slightly faster RTX 2080 Super also needed its memory bandwidth bumped up. The Xbox Series X has the perfect amount of memory bandwidth for its GPU. I think the gap is bigger than with the PS4 Pro and Xbox One X due to those Series X GPU features. And DirectX 12 engines should become far more common, which means Series X will really be stretching its legs. The original Xbox One, though it supported some, never got to benefit from the full range of DirectX 12 features because it simply didn't support them all.

I mean, I still don't think people comprehend how big a deal some of these confirmed Xbox Series X features are.




Even mesh shaders: one of their big benefits is significantly cutting down on the memory bandwidth cost of using far more complex geometry. So everything we've seen confirmed for the Series X thus far seems tailor-made for allowing the available resources to be used for a far better end result than what would have been possible without these features. Developers will have to build them into their engines, but with DirectX 12 Ultimate bringing PC and console much closer together, I can now see that happening.

PlayStation 5:
GPU: 144 TMUs (texture mapping units)
GPU frequency: 2,230 MHz
Fill Rate: 144 TMUs x 2,230 MHz = 321,120 Mtexels/s ≈ 321.12 Gtexels/s


Xbox Series X:
GPU: 208 TMUs (texture mapping units)
GPU frequency: 1,825 MHz
Fill Rate: 208 TMUs x 1,825 MHz = 379,600 Mtexels/s ≈ 379.6 Gtexels/s


-------------
Calculation of Percentage Difference: (321.12 Gtexels/s) / (379.6 Gtexels/s) = 0.845943 -> 100 x 0.845943 ≈ 84.6%


The PlayStation 5's fill rate is 84.6% of the Xbox Series X's (about 15.4% slower), which is negligible since it is still fast enough to render games in 4K at 60 frames per second.


Also, the PS5's higher frequency will enable it to perform rasterization and overall render output (ROPs) more quickly. So the PS5's GPU is a bit weaker in one regard but more powerful in two other regards.
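Restating the fill-rate arithmetic as a quick sketch (the helper name is mine; TMUs times MHz gives megatexels per second):

```python
# Texture fill rate = TMUs x core clock (MHz) -> Mtexels/s; /1000 -> Gtexels/s.
def fill_rate_gtexels(tmus: int, clock_mhz: int) -> float:
    return tmus * clock_mhz / 1000

ps5 = fill_rate_gtexels(144, 2230)  # 321.12 Gtexels/s
xsx = fill_rate_gtexels(208, 1825)  # 379.6 Gtexels/s
ratio = ps5 / xsx                   # ~0.846, i.e. PS5 at ~84.6% of XSX
```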
 

B_Boss

Member
Timestamped.



HYPE!


Amazing section! Megreo, did you check out the discussion in the comments between the host and a viewer? It's quite an interesting exchange. I'll post a bit here:

Pinned by Moore's Law Is Dead
r s
I think you guys are kind of underestimating the Xbox Series X Velocity Architecture. If the PS5 only needs to load 1 second ahead of gameplay, then the Xbox Series X isn't too far behind at 2 seconds. Users won't be able to see the difference. I'm not sold on the PS5 GPU yet, on the other hand. Otherwise great show as usual.
1 week ago

Moore's Law Is Dead
They have already shown State of Decay 2 running on the XSX - it took ~10 seconds to load, and that's slower than my PC. This isn't up for debate; it's shown:



The PS5, on the other hand, can literally load all 16GB of memory in 2 seconds. So at least on paper it shouldn't ever take more than 2 seconds, and of course you don't actually need to fill all 16GB in order to load a game (nor will any game use all 16GB, due to the OS's needs).

https://www.theverge.com/2020/3/18/21183181/sony-ps5-playstation-5-specs-details-hardware-processor-8k-ray-tracing

It's not a "slight difference" in load times. It's the difference between still having a loading screen for 10 seconds, or possibly having ZERO load screens at all. Again though, I want to see it in action before I come to any final conclusions about how this will pan out. The point is I am not "underestimating" the XBOX architecture at all. I have said good things about it, but there's nothing left to analyze - it's basically a PC, I know how it will perform (Ray Tracing is still somewhat unknown though). On the other hand the PS5 has much more potential based on what I have seen, and what people who have worked on the hardware have told me. It's still "Potential" though, and that's because Sony hasn't shown a single f***ing game yet...
6 days ago

r s
@Moore's Law Is Dead Sorry, I wasn't clear. I meant that for games developed from the ground up for the PS5 or Series X. As Cerny said and you have reiterated, the PS5 can use almost all of the 16 GB of RAM to load only 1 second's worth of gameplay. We should expect the Series X to do 2 seconds, as it can load data into RAM at half the speed. This is much better than the PS4, which needed to load at least 30 seconds of gameplay data into RAM. And we haven't even considered Sampler Feedback Streaming on the Series X. But really, users won't be able to tell the difference between the two machines in terms of loading after the initial load screen. If the PS5 can eliminate load times, the Series X can too, or may have only a second or two initially in AAA games... For the video you posted, the game wasn't fully utilizing the SSD as it wasn't using the Velocity Architecture. The architecture could be an overhead causing the 8.5-second load time (although it is still much faster than an HDD). Otherwise it will load up much faster than shown in that video, if not instantly. Here is the DF video explaining:

https://www.youtube.com/watch?v=qcY4nRHapmE&t=243s .

On the other hand, it is possible that without optimization the same game would take 4.25 seconds to load on the PS5 (twice as fast, yet not instant). I say this because Cerny was not clear about whether current-gen games would need optimizations to get the full speed of the PS5 SSD. He says here

https://www.youtube.com/watch?v=ph8LyNIT9sg&t=739s

that developers won't even need to think about whether their data is compressed. But he's not clear about current-gen games either! I doubt it's the case. If it's not, then the PS5 SSD has the additional advantage of not needing optimizations on any current-gen games. But optimizing current-gen games to utilize the Velocity Architecture should be a very simple task that even an individual on a team should be able to do.

Thank you. Btw I'm a Sony fan but I find the overall Series X system architecture more impressive. I look forward to finding out more about the PS5 though.
6 days ago

Moore's Law Is Dead
@r s I am glad you touched on "programming" for SSDs. If you look at the Spiderman demo Sony did a year ago, you can see the PS5 dev kit (slower than the final product) loaded Spiderman in 0.8s instead of 8s. That's a 10x speed-up in a game "not programmed for SSDs." However, that SOD2 XBOX demo only went from ~50s to 10s... a 5x increase. Even with inefficient last-gen games, Sony demonstrated <1s load times a year ago on unfinished hardware. This is because Sony put extra effort into making sure the extra speed of the SSD would be more effectively utilized, and it doesn't require extra programming from developers. Of course devs can "program to the metal" if they want to put in that extra effort (and many will), but Sony wasn't naive - they knew it had to "just work" for devs to really use it.

https://www.tomshardware.com/news/sony-demos-playstation-5-storage-spider-man,39395.html

Yes, there is of course no doubt game engines in general have neglected to effectively utilize SSDs, and they of course WILL better utilize SSDs once they are built for the next-gen consoles. However, there are some rather large bottlenecks in SSDs themselves that also prevent games from scaling loading times linearly with GB/s. Sony set out to eliminate these bottlenecks from the inception of the console's design (as I have covered on my channel from sources for over a year now), and XBOX really didn't. Even when games are built for these consoles you can still expect to see loading screens on the XBOX for 5-10 seconds, and if the PS5 can load Spiderman in 0.8s... I think it is realistic to expect modern games to literally boot entire sections of games from the OS screen in the blink of an eye (or technically faster than you can blink lol).

Sony didn't just add a decompression block - it has a 12-channel memory controller, 2 hosts for directing data (not one), and a much more advanced tiered loading system than what is in the XBOX. I am not underestimating anything about the "Velocity Architecture" - I am telling you the fact that Sony put a substantial amount of R&D into how to better utilize faster storage too. Conversely, the XSX was planning on using a 1.2-1.8GB/s drive until they heard how much effort Sony was putting into their own SSD. Both consoles have decompression blocks now, but that's about all you can say.

So I must reiterate yet again: it is a fact that the PS5's SSD and its supporting I/O controller are leagues ahead of what is in the XBOX. All we can do now is wait for results on how much better, but it isn't going to be "negligible" based on all available information. It is not remotely accurate to compare the XBOX SSD to the PS5's. XBOX is settling for a midrange current-gen storage solution, and Sony decided to attempt to entirely skip a generation of storage and go "Next-Next-Gen." That's why the XBOX requires little analysis moving forward: I am confident it will perform like a 2080 Ti + 3700X gaming PC. On the other hand, I see the PS5 as potentially performing like something we have never seen before. Again... "potentially," we will see.

Have you watched Coretek's latest video? He explains more of the actual tech behind what I am talking about.
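The load-time claims in that exchange can be sanity-checked with simple division (the throughput figures below are the publicly quoted compressed rates, treated here as assumptions; real games add engine overhead, so these are lower bounds):

```python
# Rough load-time estimate: GB to load / effective (compressed) throughput.
# Assumed figures: PS5 ~8-9 GB/s typical with Kraken, XSX 4.8 GB/s with BCPack.
def load_time_s(gb_to_load: float, effective_gbps: float) -> float:
    return gb_to_load / effective_gbps

ps5_full = load_time_s(16, 8.0)  # 2.0 s to refill all 16 GB
xsx_full = load_time_s(16, 4.8)  # ~3.3 s for the same amount of data
```

This matches the "2 seconds for all 16GB" claim above, and shows why observed load times well above these floors point at engine bottlenecks rather than the drives themselves.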
 
Last edited:

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
Even Sony admitted early in the gen that the PS3's Cell was hard to code for... specifically to use its SPUs.


Krazy Ken actually WANTED the PS3 to be hard to develop for. He wanted it that way specifically so that it would take years for devs to show its full potential. He absolutely didn't want the PS3 to be easily exploited by devs in the first couple of years. He had a weird sense of pride that the PS3 was hard to develop for.

He had the mindset of an engineer; Cerny has the mindset of a developer/programmer. That's the main difference between how the PS3 was created and how the PS4 and PS5 have been created.


They say:

"If you are being really, really charitable the Xbox could have a 20% advantage (the other guy, I think, says 10 at most), but these consoles are so different in design. Developers are saying they're extremely close in performance. Coretek, behind the scenes, is worried that PC games could be left behind a little bit, due to the latency reduction in the PS5, because certain tasks require half the resources. PC stuff costs so much more than what you're getting in a console."

The narrator says he has also heard similar things from people he knows behind the scenes. The SSD could end up being really ground-changing.

APIs are going to be the way they scale performance soon; the way of the future.
Loading is completely gone on one of the consoles, half a second on everything.
But he says in the end it still needs to be proven.

He is saying people should stop talking about resolution for the next generation; 4K is done, CPUs are so powerful.

I am not as technical as you guys, so I probably missed a lot, sorry.

Man this is so exciting!
 
Last edited:

rnlval

Member
VRS, Sampler Feedback, and Mesh Shaders are all done in the Geometry Engine.
That is a core part of RDNA 2 and can't be removed.

Texture Space Shading is the Nvidia name from before MS called it Sampler Feedback in the DX API, btw.
That's not correct.

1. VRS refers to variable shading resolution. It still needs shading power, just at different resolutions.
2. Sampler Feedback is already-shaded areas being recycled for the next frame, i.e. don't shade again areas that are already shaded.

----
Mesh Shaders refer to the Geometry Engine.
 
Last edited:
Sony didn't just add a decompression block - it has a 12-channel memory controller, 2 hosts for directing data (not one), and a much more advanced tiered loading system than what is in the XBOX.

It is a fact that the PS5's SSD and its supporting I/O controller are leagues ahead of what is in the XBOX.

I've been saying this all this time. Cerny put about 6 custom chips in the APU to remove all possible bottlenecks for SSD usage. Those are 6 hardware blocks. Meanwhile, MS has only a decompression block that is not even half as fast as the one in the PS5.

It's not just about the raw speed. It's the overall I/O throughput.
 

ethomaz

Banned
That's not correct.

1. VRS refers to variable shading resolution.
2. Sampler Feedback is already-shaded areas being recycled for the next frame, i.e. don't shade again areas that are already shaded.

----
Mesh Shaders refer to the Geometry Engine.
?

VRS = Variable Rate Shading.
Variable Rate Shading is done at Geometry Engine time, before you draw.

[image: activision.jpg]


Sampler Feedback (it is software logic, not a hardware feature) has only two uses:
Streaming.
Texture Space Shading.

So I don't think we are talking about texture streaming here (that is a non-issue with faster SSDs), so it is basically used for Texture Space Shading.
Anyway, both Texture Streaming and Texture Space Shading can be done without Sampler Feedback logic.

Mesh Shaders are Primitive Shaders on the AMD side.
They are done by the Geometry Engine.
 
Last edited:

rnlval

Member
Krazy Ken actually WANTED the PS3 to be hard to develop for. He wanted it that way specifically so that it would take years for devs to show its full potential. He absolutely didn't want the PS3 to be easily exploited by devs in the first couple of years. He had a weird sense of pride that the PS3 was hard to develop for.

He had the mindset of an engineer; Cerny has the mindset of a developer/programmer. That's the main difference between how the PS3 was created and how the PS4 and PS5 have been created.
Note that a developer/programmer is also known as a software engineer; Cerny is a software engineer.

Krazy Ken's background is an electronics degree; Krazy Ken is an electronics engineer.
 

SonGoku

Member
Texture Space Shading is the Nvidia name from before MS called it Sampler Feedback in the DX API, btw.
After reading the MS paper, it seems Sampler Feedback is a software layer that improves TSS performance and makes it simpler for devs by identifying texture sampling info and locations.
 

ethomaz

Banned
After reading the MS paper, it seems Sampler Feedback is a software layer that improves TSS performance and makes it simpler for devs by identifying texture sampling info and locations.
Yes.

It's basically software logic to help Texture Streaming and TSS.
There is no other use for it.
 
Last edited:

rnlval

Member
?

VRS = Variable Rate Shading.

Sampler Feedback has only two uses:
Streaming.
Texture Space Shading.

So I don't think we are talking about texture streaming here (that is a non-issue with faster SSDs), so it is basically used for Texture Space Shading.
Mesh Shader = geometry processing improvement.

Variable Rate Shading = shading at different resolutions. Needs shader power, with a new feature.

Sampler Feedback:
1. Texture-Space Shading: turn the shaded area into a texture and recycle it into the next frame. Apply the shaded texture to geometry instead of texture+shader to geometry. Needs the texture mapper and saves shader resources.
2. Texture Streaming: evolved Partially Resident Textures combined with Texture-Space Shading (recycling the shaded area as texture data).

Your claim that "VRS, Sampler Feedback and Mesh Shaders are all done in Geometry Engine" is not correct.
 

ethomaz

Banned
Mesh Shader = geometry processing improvement.

Variable Rate Shading = shading at different resolutions. Needs shader power, with a new feature.

Sampler Feedback:
1. Texture-Space Shading: turn the shaded area into a texture and recycle it into the next frame. Apply the shaded texture to geometry instead of texture+shader to geometry. Needs the texture mapper and saves shader resources.
2. Texture Streaming: evolved Partially Resident Textures combined with Texture-Space Shading (recycling the shaded area as texture data).

Your claim that "VRS, Sampler Feedback and Mesh Shaders are all done in Geometry Engine" is not correct.
Again.

VRS = Variable Rate Shading.
Variable Rate Shading is done at Geometry Engine time, before you draw.

[image: activision.jpg]


Sampler Feedback (it is software logic, not a hardware feature) has only two uses:
Streaming.
Texture Space Shading.

So I don't think we are talking about texture streaming here (that is a non-issue with faster SSDs), so it is basically used for Texture Space Shading.
Anyway, both Texture Streaming and Texture Space Shading can be done without Sampler Feedback logic.

Mesh Shaders are Primitive Shaders on the AMD side.
They are done by the Geometry Engine.

All these features are related to/done by the Geometry Engine on AMD hardware.
The Geometry Engine is not a single hardware unit... there are a lot of small units that do this work before the draw... for example, the Geometry Engine can process 4 Primitive Shaders (aka Mesh Shaders) per cycle.

See the RDNA Geometry Processor below (AMD calls it the Geometry Engine in another slide, so it is the same).

[image: 3.jpg]
 
Last edited: