
There is a non-consumer SSD that is faster than the PS5 SSD

Elog

Member
Timestamp? Where are you getting this observation from?

Apologies for the late response - work came in between.

Let's go through the two sections that are most relevant.

1) First Linus runs ATTO (best timestamps at 7:28 and 7:37) - a synthetic Windows benchmark tool. Looking at the graph you see the file sizes tested and the read/write speeds you get in theory. As you can see, the R/W throughput increases with file size since latency is dominant, i.e. the larger the file size, the faster the R/W. Please also note that ATTO uses the L1 and L2 caches so as not to be slowed down by the CPU/RAM, i.e. it is theoretical. You can see this at the 7:28 timestamp, where RAM is at just 4% and the CPU is at 3%.

Depending on the file size, the bandwidth ranges from roughly 40 MB/s to 20 GB/s, with a latency/response time of 4,000 ms.
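(To make the latency-amortization point concrete, here is a minimal, untested Python sketch of an ATTO-style block-size sweep. The file path and sizes are made up for illustration, and a fair run would also evict the OS page cache between passes:)

```python
import time

PATH = "testfile.bin"  # hypothetical multi-GB file on the SSD under test

def read_throughput(block_size: int) -> float:
    """Sequentially read the whole file in fixed-size blocks, return GB/s."""
    total = 0
    start = time.perf_counter()
    with open(PATH, "rb", buffering=0) as f:  # unbuffered to avoid Python-side buffering
        while chunk := f.read(block_size):
            total += len(chunk)
    return total / (time.perf_counter() - start) / 1e9

# Small blocks pay the fixed per-request cost (syscall + device latency)
# many more times than large ones, hence the rising curve ATTO shows.
for bs in (4 << 10, 64 << 10, 1 << 20, 64 << 20):  # 4 KiB .. 64 MiB
    print(f"{bs:>9} B blocks: {read_throughput(bs):5.2f} GB/s")
```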

2) Then Linus reads 16 4K video files in parallel (best timestamps at 8:11 to 8:14). Please note that 16 huge files in Windows is a best-case scenario, since textures, for example, come in thousands of files, which pushes down performance just as with ATTO above. Here you see that reading these files into RAM utilises 32 cores at 100% (roughly 50% of the 3990X Threadripper), so the CPU power required is immense. Secondly, you see that the read throughput is 1.5 GB/s at 50% disk utilisation. Is there another bottleneck? Or is the 'real' throughput 3 GB/s in this best-case Windows example? I do not know. The response time is good (not surprising with only 16 files) at around 1 ms.
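(Similarly, a hedged sketch of the 16-parallel-reads pattern - the file names are hypothetical; swap in thousands of small texture files instead, and the per-file overhead is what drags the numbers down:)

```python
import time
from concurrent.futures import ThreadPoolExecutor

FILES = [f"clip_{i:02d}.mp4" for i in range(16)]  # hypothetical large video files
BLOCK = 4 << 20  # 4 MiB per read call

def drain(path: str) -> int:
    """Read one file end to end, return bytes read."""
    total = 0
    with open(path, "rb", buffering=0) as f:
        while chunk := f.read(BLOCK):
            total += len(chunk)
    return total

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(FILES)) as pool:
    total = sum(pool.map(drain, FILES))  # 16 read streams in flight at once
elapsed = time.perf_counter() - start
print(f"aggregate read: {total / elapsed / 1e9:.2f} GB/s in {elapsed:.1f} s")
```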

Net-net: using thousands of files under Windows would most likely give you around 1.5 GB/s and latencies/response times around 1,000 ms, and would require 32+ CPU cores to pull off. This is below what was shown on the PS5 (according to Epic). And we have not yet added the VRAM transfer and driver overhead to these numbers to make them comparable to the Epic showcase.

I still think the Linus video is impressive, but if the PS5 delivers what Epic and Cerny claim, it is in a different league. See TS's response below:



 

jigglet

Banned
But surely there weren't many people here who expected Sony to hold onto this lead for long, right? It was always going to be overtaken relatively quickly on PC.
 

Knch

Member
2) Then Linus reads 16 4K video files in parallel (best timestamps at 8:11 to 8:14). Please note that 16 huge files in Windows is a best-case scenario, since textures, for example, come in thousands of files, which pushes down performance just as with ATTO above. Here you see that reading these files into RAM utilises 32 cores at 100% (roughly 50% of the 3990X Threadripper), so the CPU power required is immense. Secondly, you see that the read throughput is 1.5 GB/s at 50% disk utilisation. Is there another bottleneck? Or is the 'real' throughput 3 GB/s in this best-case Windows example? I do not know. The response time is good (not surprising with only 16 files) at around 1 ms.

Yes, those very accurate Windows disk transfer measurements are known all around the world for their impeccable accuracy. /s
Those CPU measurements are that high because of decoding 16 4K streams in software.
 

Elog

Member
Yes, those very accurate Windows disk transfer measurements are known all around the world for their impeccable accuracy. /s
Those CPU measurements are that high because of decoding 16 4K streams in software.

You mean like decompressing thousands of textures? :)
 

geordiemp

Member
Apologies for the late response - work came in between.

Let's go through the two sections that are most relevant.

1) First Linus runs ATTO (best timestamps at 7:28 and 7:37) - a synthetic Windows benchmark tool. Looking at the graph you see the file sizes tested and the read/write speeds you get in theory. As you can see, the R/W throughput increases with file size since latency is dominant, i.e. the larger the file size, the faster the R/W. Please also note that ATTO uses the L1 and L2 caches so as not to be slowed down by the CPU/RAM, i.e. it is theoretical. You can see this at the 7:28 timestamp, where RAM is at just 4% and the CPU is at 3%.

Depending on the file size, the bandwidth ranges from roughly 40 MB/s to 20 GB/s, with a latency/response time of 4,000 ms.

2) Then Linus reads 16 4K video files in parallel (best timestamps at 8:11 to 8:14). Please note that 16 huge files in Windows is a best-case scenario, since textures, for example, come in thousands of files, which pushes down performance just as with ATTO above. Here you see that reading these files into RAM utilises 32 cores at 100% (roughly 50% of the 3990X Threadripper), so the CPU power required is immense. Secondly, you see that the read throughput is 1.5 GB/s at 50% disk utilisation. Is there another bottleneck? Or is the 'real' throughput 3 GB/s in this best-case Windows example? I do not know. The response time is good (not surprising with only 16 files) at around 1 ms.

Net-net: using thousands of files under Windows would most likely give you around 1.5 GB/s and latencies/response times around 1,000 ms, and would require 32+ CPU cores to pull off. This is below what was shown on the PS5 (according to Epic). And we have not yet added the VRAM transfer and driver overhead to these numbers to make them comparable to the Epic showcase.

I still think the Linus video is impressive, but if the PS5 delivers what Epic and Cerny claim, it is in a different league. See TS's response below:





Yup, the key words from Sweeney are CPU and driver abstraction overhead, and he clearly says the PS5 is superior.

Abstraction overhead is a sly dig at the DX12 APIs...

Posters thinking PCs can brute-force past PS5's efficiency are in for a rude awakening.
 
Last edited:

Knch

Member
You mean like decompressing thousands of textures? :)
Why would you decompress them and not store them in a format the GPU can actually use? And decompressing images is far less intensive than decoding an H.265 stream (or whatever format they're actually using).
 

Elog

Member
Why would you decompress them and not store them in a format the GPU can actually use? And decompressing images is far less intensive than decoding an H.265 stream (or whatever format they're actually using).

You are right. It still leaves you with read throughput around 1-2 GB/s, though, and latencies around 1 second. And that is before VRAM transfer and GPU driver overhead. Ergo: the PC architecture does not keep up even with the best available hardware, which is exactly TS's point.
 

Rikkori

Member
It's gonna be funny when, even 2 years in, you'll still not max out even a Crucial MX500 when gaming. People who cling to cUstOm hArDwArE as a solution have completely missed the point - it was always down to the software, dummy. We've had faster ramdisks (incl. lower latency!) than all the PS5 custom hardware put together more than 13 years ago.

Then again, I guess when you're stuck at 1440p 30 fps you gotta cling onto whatever you can. :messenger_tears_of_joy:
 

On Demand

Banned
It’s not just the speed. It’s everything built around it and the way it’s customized specifically for the console.

No bottlenecks either, as most PC drives don't even hit their full speed.

PC fanboys couldn't wait to try and say "ha! PC already has something better!" It really doesn't.

By the way, the PS5 SSD can reach 22 GB/s with its Kraken compression.
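(For context, that headline number is just the raw link rate times a best-case compression ratio; Sony's own typical figure was 8-9 GB/s. A quick sketch of the arithmetic, with the ratios inferred from those public figures:)

```python
raw = 5.5              # GB/s, PS5 raw SSD bandwidth per Sony's spec
best_case_ratio = 4.0  # ~4:1 Kraken ratio implied by the 22 GB/s headline
typical_ratio = 1.6    # ~8-9 GB/s typical, per Cerny's Road to PS5 talk

print(f"best case: {raw * best_case_ratio:.1f} GB/s")  # 22.0
print(f"typical:   {raw * typical_ratio:.1f} GB/s")    # ~8.8
```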
 

Knch

Member
You are right. It still leaves you with read throughput around 1-2 GB/s, though, and latencies around 1 second. And that is before VRAM transfer and GPU driver overhead. Ergo: the PC architecture does not keep up even with the best available hardware, which is exactly TS's point.
Real-world latency for old SATA SSDs when reading random data is ~0.011 ms, but I'm sure the newer SSDs are somehow worse...
 

ToadMan

Member
Take RDR2 as an example: the XB1X runs it at 4K, but on the PS4 Pro RDR2 looks blurry as all f. I believe it's 2560x1440 on the Pro, if that. And now we have the same gap between the PS5 and XSX. SSD speed is nowhere near as important as GPU performance for gaming. Who cares if you can load a game 1 second faster on the PS5 compared to the XSX.

Gap PS4 to XB1 - 30% TFLOPS
Gap XB1X to PS4 Pro - 30% TFLOPS

Gap XSX to PS5 - 15% TFLOPS.

You're not gonna see any difference... in multiplats. DF analysis might show reduced fidelity on XSX with UE5 games, but nothing anyone will notice in practice.
 
Last edited:

ToadMan

Member
Oh, I liked some of the PS4 exclusives and couldn't care less which sells more. As long as there are many more great games coming next gen, not just remasters and sequels/prequels, I'll get a PS5, but it's sad to see people blindly believe PR FUD.

MS don't have any first-party next-gen games...

In fact, I can't remember the last game MS actually put out... well, that Minecraft indie game, I suppose, lol.
 

Frederic

Banned
Gap PS4 to XB1 - 30% TFLOPS
Gap XB1X to PS4 Pro - 30% TFLOPS

Gap XSX to PS5 - 15% TFLOPS.

You're not gonna see any difference... in multiplats. DF analysis might show reduced fidelity on XSX with UE5 games, but nothing anyone will notice in practice.

wtf? Math is hard, right? It's not 15%. It's at least 18%.

You used the MAX amount of TFs for the PS5, but we all know that it's variable - they are using a BOOST mode. We don't know what the lowest frequency will be, but using the MAX value as the "baseline" is just wrong.

But there are also other things: the XSX has a much higher memory bandwidth. Just compare the factual numbers:

PS5 vs XSX:
Cores: 2304 vs 3328
TMUs: 144 vs 208
ROPs: 64 vs 80
Bus: 256-bit vs 320-bit

That's a substantial difference between the two platforms for those trying to push that the only difference is 18% and limited to CU count.

No wonder developers are coming out in droves stating it's staggering how much more performance the XBSX gives.

44% more cores.
44% more TMUs.
25% more ROPs.
25% more memory bus.

And the difference is well over 18%. Digital Foundry showed that 10 TF from 36 compute units leads to less performance than 10 TF from 40 compute units. Xbox has 12 TF from 52 compute units. The difference is up to 40%.

And what about ray tracing? The XSX has 13 TFLOPS alone for dedicated ray tracing, which makes it a 25 TFLOPS machine. The PS5 has far fewer CUs.

MS don't have any first-party next-gen games...

In fact, I can't remember the last game MS actually put out... well, that Minecraft indie game, I suppose, lol.

This is just console-warring bullshit. There are 15 Xbox studios working on next-gen games!
 

ToadMan

Member
We got some new PCs for development here, 3k each. Max speed 1.5 GB/s... lol

That's PC for you - take a general-purpose machine and throw money at it until it can just about perform to the spec of a specialist machine costing 10% of the price.

Honestly, to me, when people talk about PC stuff vs consoles it's just hand-wavium magic. Oh yeah, a PC can do "this" - yeah, it can, at much greater expense, and 99.9% of PCs don't, because 99.9% of PC owners aren't as stupid as the one making the comment.
 

Frederic

Banned
Really? Be kind and educate me by showing your working...

The PS5 is 10.3 TFLOPS max and the XSX is 12.1 TFLOPS - that's an 18% difference. Again, for the PS5 this is the highest maximum possible; we don't know yet how far down it can go.
Also, Digital Foundry showed that 10 TF from 36 compute units leads to less performance than 10 TF from 40 compute units. The Xbox has 12 TF from 52 compute units.
That's because AMD GPUs really don't work that well at high frequencies.


No.... they're working on cross-gen multiplat games... big difference.

No, only games releasing until October 2021 are cross-gen; everything after that - including Hellblade 2 - is next-gen only.
There are also two games announced for launch that are coming to the XSX but not to the Xbox One.

But please tell me, what PS5 first-party game has been announced for the launch of the PS5? Just tell me a single game, please.

That's PC for you - take a general-purpose machine and throw money at it until it can just about perform to the spec of a specialist machine costing 10% of the price.

Honestly, to me, when people talk about PC stuff vs consoles it's just hand-wavium magic. Oh yeah, a PC can do "this" - yeah, it can, at much greater expense, and 99.9% of PCs don't, because 99.9% of PC owners aren't as stupid as the one making the comment.

That's not the point. Tim said that you can't buy anything right now that can match it, which is bullshit of course. I mean, you can't even buy the PS5 right now, lol - we don't even know when we can buy the PS5.

Posters thinking PCs can brute-force past PS5's efficiency are in for a rude awakening.

Why would any third-party developer build and optimize their engines and games around a single I/O solution? Why would they use assets and rendering techniques such that a single platform would work great but all other platforms would be gimped? Did any multiplatform dev ever do that? Did they use the Cell at its fullest?
 

Elog

Member
Real-world latency for old SATA SSDs when reading random data is ~0.011 ms, but I'm sure the newer SSDs are somehow worse...

You have to separate latency as in SSD drive responsiveness from the system latency of the SSD -> readable information in RAM path (normally called response time in Windows - not totally accurate, but roughly so). System latency is the problem on the PC platform.

As Carmack pointed out in a tweet, you can actually make this piece of the PC architecture much better by driving the SSD-to-RAM loop while bypassing components of the kernel. There is no way around this for getting the information into VRAM, though - the CPU-GPU driver overhead will completely dominate system latency and throughput. That is what JC meant with his tweet below:
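(To make the kernel-bypass idea concrete: a minimal, Linux-only Python sketch using O_DIRECT to skip the page cache. The file name is hypothetical, and note this only covers the SSD -> RAM half - the hop into VRAM, where the driver overhead lives, is untouched:)

```python
import mmap
import os

PATH = "asset.bin"  # hypothetical asset file
BLOCK = 1 << 20     # 1 MiB; O_DIRECT needs block-aligned buffers and sizes

# O_DIRECT tells the kernel to DMA straight into our buffer, skipping the
# page cache - one of the kernel layers Carmack suggests bypassing.
fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)
buf = mmap.mmap(-1, BLOCK)  # anonymous mapping is page-aligned, as required
total = 0
try:
    while (n := os.readv(fd, [buf])) > 0:
        total += n  # a real loader would hand the buffer off for consumption here
finally:
    os.close(fd)
    buf.close()
print(f"read {total} bytes with the page cache bypassed")
```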

 

Airbus Jr

Banned
Just a little reminder: OP (LadyBernstakel) was banned from Era for console warring.

So I was thinking she was repeating here what she used to do there.
 
Last edited:

Frederic

Banned
Apparently, yes. Those who know math know that either value can be calculated.


He is also using the maximum PS5 TFLOPS figure, 10.28, which again is bullshit. We do not know yet how far it can go down or how often, so using the maximum possible as a "baseline" is just wrong and doesn't make any sense.
Also, he confirms that the XSX is 18% more powerful than the PS5, and you have to take into account that the PS5 is using variable frequencies:

Several developers speaking to Digital Foundry have stated that their current PS5 work sees them throttling back the CPU in order to ensure a sustained 2.23GHz clock on the graphics core.

Source: https://www.eurogamer.net/articles/digitalfoundry-2020-playstation-5-the-mark-cerny-tech-deep-dive

In addition, he misses a huge point: RDNA 2.0 is much, much more efficient than GCN, i.e.:

You can still do a lot more work with 2 TFLOPS of RDNA 2.0 than you can with 500 GFLOPS of GCN. This is a much bigger difference now, since you can do way more with it.

I guess we will see when the games arrive.
 

ToadMan

Member
The PS5 is 10.3 TFLOPS max and the XSX is 12.1 TFLOPS - that's an 18% difference.

I wanted you to show me your calculation - and the same one for PS4 vs XB1 and PS4 Pro vs XB1X, please. You're a maths genius - you can drop those here super fast, I'm sure.

No, only games releasing until October 2021 are cross-gen; everything after that - including Hellblade 2 - is next-gen only.

Fine, see you in 2022 or later when MS starts their next gen. We'll be a third of the way into the PS5's next gen by then. In the meantime, enjoy playing the XB1 games no one bought before.
 
Last edited:

geordiemp

Member
The PS5 is 10.3 TFLOPS max and the XSX is 12.1 TFLOPS - that's an 18% difference. Again, for the PS5 this is the highest maximum possible; we don't know yet how far down it can go.
Also, Digital Foundry showed that 10 TF from 36 compute units leads to less performance than 10 TF from 40 compute units. The Xbox has 12 TF from 52 compute units.
That's because AMD GPUs really don't work that well at high frequencies.

No, only games releasing until October 2021 are cross-gen; everything after that - including Hellblade 2 - is next-gen only.
There are also two games announced for launch that are coming to the XSX but not to the Xbox One.

But please tell me, what PS5 first-party game has been announced for the launch of the PS5? Just tell me a single game, please.



That's not the point. Tim said that you can't buy anything right now that can match it, which is bullshit of course. I mean, you can't even buy the PS5 right now, lol - we don't even know when we can buy the PS5.



Why would any third-party developer build and optimize their engines and games around a single I/O solution? Why would they use assets and rendering techniques such that a single platform would work great but all other platforms would be gimped? Did any multiplatform dev ever do that? Did they use the Cell at its fullest?

15% off or 18% added, depending which way you count... and that's just for the compute, not feeding into the equation other things such as faster cache, rasterisation etc.

Digital Foundry did show that, yes - for PC parts that are not RDNA2, so it's worthless in this discussion. Go educate yourself. Pentium 4s go to 3.6 GHz, so what?

GCN does not work at high frequencies, correct - and what is your point exactly? Enlighten us, give me a laugh.

What you don't know, and AMD/Sony and MS do, is up to what frequency RDNA2 stays performant, so stop talking crap. There was a slide on perf/watt from AMD, and Cerny stated 2.23 GHz was the performance cut-off for RDNA2. But of course you know better... no, so just staaaap.

Yes, Tim Sweeney is wrong - he said the PS5 had better performance for I/O - and the learned design engineer and billionaire Frederic is correct... where do we find these people?

Are you timdog? I noticed you are a new XSX troll account that just joined, spewing the usual rubbish.
 
Last edited:

Knch

Member
You have to separate latency as in SSD drive responsiveness from the system latency of the SSD -> readable information in RAM path (normally called response time in Windows - not totally accurate, but roughly so). System latency is the problem on the PC platform.

As Carmack pointed out in a tweet, you can actually make this piece of the PC architecture much better by driving the SSD-to-RAM loop while bypassing components of the kernel. There is no way around this for getting the information into VRAM, though - the CPU-GPU driver overhead will completely dominate system latency and throughput. That is what JC meant with his tweet below:


You have to provide realistic numbers, not ones pulled out of thin air or liberally inferred from a random screenshot of a random test. Because everything past the SSD is an order of magnitude faster, and there aren't 100,000 steps in between.

You can still be right about having a better SSD in your PS5 without resorting to extreme arse-pulling.
 

FranXico

Member
He is also using the maximum PS5 TFLOPS figure, 10.28, which again is bullshit.
The teraflops numbers given for either console are theoretical maxima. The theoretical maximum TF the PS5 GPU can push at its maximum frequency is as real as the theoretical maximum TF the XSX can push. Both require full CU load, and neither will ever compute that many operations at a sustained rate.

We do not know yet how far it can go down or how often, so using the maximum possible as a "baseline" is just wrong and doesn't make any sense.
So you suggest computing the theoretical maximum using the lowest frequency? Are you even aware of how ridiculous that sounds?

Also, he confirms that the XSX is 18% more powerful than the PS5, and you have to take into account that...
Yes, nobody said otherwise. He even patiently and kindly explains that one can say either that the XSX is 18% stronger or that the PS5 is 15% weaker. Both statements are true. That's how math works.
And stop repeating yourself about the variable frequency; it doesn't make a theoretical maximum any more fictitious than it already is.

You can still do a lot more work with 2 TFLOPS of RDNA 2.0 than you can with 500 GFLOPS of GCN. This is a much bigger difference now, since you can do way more with it.
Yes, just like you can do a lot more with 10 TF of RDNA2 than with 10 TF of GCN. That's why people refer to differences in relative terms.

I guess we will see when the games arrive.
The only sentence I fully agree with.
 
Last edited:

Elog

Member
You have to provide realistic numbers, not ones pulled out of thin air or liberally inferred from a random screenshot of a random test. Because everything past the SSD is an order of magnitude faster, and there aren't 100,000 steps in between.

You can still be right about having a better SSD in your PS5 without resorting to extreme arse-pulling.

I will end our dialogue here. You are basically saying that TS and JC are wrong - two of the best developers in the industry - and that hard data using state-of-the-art hardware is arse-pulling. You need to separate the theoretical bandwidth and latency of SSDs from what you get from the system once the CPU and RAM, at a minimum, and most often VRAM and driver overhead, are added - i.e. the use case for graphics. This is where the PC platform bottlenecks hard.
 

Knch

Member
I will end our dialogue here. You are basically saying that TS and JC are wrong - two of the best developers in the industry - and that hard data using state-of-the-art hardware is arse-pulling. You need to separate the theoretical bandwidth and latency of SSDs from what you get from the system once the CPU and RAM, at a minimum, and most often VRAM and driver overhead, are added - i.e. the use case for graphics. This is where the PC platform bottlenecks hard.
Looking at Task Manager during a single YouTube video doing something totally unrelated, and using that as "hard" data on the maximum capabilities of something (especially the % disk usage), is indeed arse-pulling.

I never addressed JC's claims, because he isn't wrong.
I did address your claims, because they're based on fudge all and dead wrong.

If everything past the SSD is an order of magnitude faster, how many steps and how much overhead would there need to be to go from a real-world (not fucking theoretical - actually tested and proven more than once) latency of 0.011 ms to a whopping 1 s?!
 

Dontero

Banned
I think you can make the game run with an HDD, but the experience is extremely negatively impacted.

Nope - once games are made with SSDs in mind, playing on an HDD will be almost impossible.
Some games are already like that. Path of Exile is complete jank on an HDD, and playing Star Citizen on an HDD is true horror.

Frequent multi-second stutters down to 0 FPS are the norm.

By the way, the PS5 SSD can reach 22 GB/s with its Kraken compression.

You mean with compression taken out of PC space?
 
Last edited:
It's gonna be funny when, even 2 years in, you'll still not max out even a Crucial MX500 when gaming. People who cling to cUstOm hArDwArE as a solution have completely missed the point - it was always down to the software, dummy. We've had faster ramdisks (incl. lower latency!) than all the PS5 custom hardware put together more than 13 years ago.
Ramdisk loading for games has been compared to SATA SSD loading; it is no faster. The PS5 is significantly faster than SATA SSD loading in practice, so it is significantly faster than ramdisk loading.

Edit: a ramdisk should be even faster than this RAID setup, yet it too is slower than the PS5.

The applications would have to be rewritten from the ground up to run from RAM without a traditional install to actually compete with the PS5. That won't happen, and if the lights go out or the computer crashes, you'll have to spend 10+ minutes loading data into RAM again. And that's per game - imagine having to set aside 100-200+ GB of RAM per game.
 
Last edited:

KingT731

Member
The PS5 is 10.3 TFLOPS max and the XSX is 12.1 TFLOPS - that's an 18% difference. Again, for the PS5 this is the highest maximum possible; we don't know yet how far down it can go.
Also, Digital Foundry showed that 10 TF from 36 compute units leads to less performance than 10 TF from 40 compute units. The Xbox has 12 TF from 52 compute units.
That's because AMD GPUs really don't work that well at high frequencies.
Oh, you mean when they tried to extrapolate info about an architecture that isn't even available?
 

LordOfChaos

Member
A hotly defended Linus was being facetious, apparently. Looking forward to seeing the interview with a Sony engineer when they're not memeing about specs.

 

SatansReverence

Hipster Princess
Just a little reminder: OP (LadyBernstakel) was banned from Era for console warring.

So I was thinking she was repeating here what she used to do there.



Yes, ResetEra, the epitome of meaningful and reasonable bans.
 
The teraflops numbers given for either console are theoretical maxima. The theoretical maximum TF the PS5 GPU can push at its maximum frequency is as real as the theoretical maximum TF the XSX can push. Both require full CU load, and neither will ever compute that many operations at a sustained rate.

Yes, nobody said otherwise. He even patiently and kindly explains that one can say either that the XSX is 18% stronger or that the PS5 is 15% weaker. Both statements are true. That's how math works.

This is true. And when you talk about CPUs and max performance in CALCULATIONS - and this applies to ANY computer-based calculation, going by my experience in the industry - if you are comparing two numbers, you calculate with the higher number as the theoretical maximum.

You would say that 12 GB of RAM can hold 75% of what 16 GB can hold, because you're talking about a cap. When you look at your CPU utilization, you look at it as a percentage of 100 percent. You don't say "The load can go 50% higher than it is now!" - you say "It is at 66 percent", and if you were to draw a graph, you always make the top of the graph the highest number if you're talking about a benchmark. You could EXPRESS it by saying "X percent more performance", but if you had, say, 5 different GPUs in order, you would take the highest number and place the others at a PERCENTAGE of that performance; you would not rate the lowest ones and then express the higher number as a percentage improvement over the lower.

If 12.15 TF is the 100% top tier in this comparison, the PS5 is around 15 percent lower, as it is 84-something percent of the 12.15.

That's how performance for provisioning, partitioning, lane utilization on a bus, and anything else is calculated - as a percentage of a whole. The 18 percent can be used for marketing by Microsoft when they talk about improvement, but if you took the Xbox Series X and lowered its GPU floating-point performance by 15 percent of its current value, you would have a PS5.

They are both correct, but one sounds nicer to the Xbox fans, so let them have it.

Either one places the difference between the machines at about half of what it is between the PS4 Pro and Xbox One X.

In fact it's WORSE for them if they insist on calculating by a "percentage increase to meet" metric...

The 6 TF of the Xbox One X is almost 143 percent of the 4.2 TF of the PS4 Pro - an increase of 43 percent. Even if you go with 18 percent, using the same calculation the difference is less than half (half being 21.5).

If you use the metric I present, 4.2 shows as 70%, so the difference between them is roughly 30 percent - and the 15 percent gap between the new machines is approximately half of that.

If they insist on 18 percent and use that calculation, the gap between the Series X and the PS5 is NOTABLY less than half of the gap between the current top consoles. At least I was giving them the benefit of the doubt by saying it was half, down the middle (which it is).

Edit: just as another note/example regarding "percentage of" calculations in computing - this applies to any memory or storage. When you calculate overhead, you calculate to a maximum, and the provisioning is done within it. For instance, you set your EMC array to call home when a LUN reaches a percentage of maximum. If you say "I have 10 percent left", it doesn't mean you have 10 percent of the 90 percent left (i.e. 9 percent); it means 10 percent of the total. This is neither here nor there, but again, this would even apply to maximum load. Because of this you could/would say that the PS5 handles 85 percent of the LOAD the Xbox Series X does. I think that makes sense as a way to express it: it can do 85 percent of the floating-point work the Xbox Series X can do per second.
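(The arithmetic both ways, as a quick Python sketch using the peak figures quoted in this thread:)

```python
xsx, ps5 = 12.15, 10.28   # peak TFLOPS figures quoted in this thread

print(f"PS5 as a share of XSX: {ps5 / xsx:.1%}")      # ~84.6% -> ~15% lower
print(f"XSX increase over PS5: {xsx / ps5 - 1:.1%}")  # ~18.2% higher

x1x, pro = 6.0, 4.2       # the current-gen comparison used above
print(f"Pro as a share of X1X: {pro / x1x:.1%}")      # 70.0% -> 30% lower
print(f"X1X increase over Pro: {x1x / pro - 1:.1%}")  # ~42.9% higher
```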
 
Last edited:

MoreJRPG

Suffers from extreme PDS
I don't consider my current SSD to be any sort of issue with gaming. It's not like the Sega CD days of waiting for things to load.

I have an MX500 and it's annoying because it loads screens before I have time to finish reading the tooltips. I vaguely remember A Plague Tale loaded before it would even show the first sentence. It's hilarious that people act like non-NVMe SSDs are garbage lol.
 
I have an MX500 and it's annoying because it loads screens before I have time to finish reading the tooltips. I vaguely remember A Plague Tale loaded before it would even show the first sentence. It's hilarious that people act like non-NVMe SSDs are garbage lol.

You're talking about loading a game that was designed to load well from a platter disk. Nobody is saying those SATA SSDs are trash; it's merely a comparison of technology. For instance, a 5600 XT is not trash, but if the new machines only did 7.2 teraflops (which is no slouch - it's more powerful than any current console), then you'd compare it to systems doing more than 10 TF... It's the same with storage. Yes, a SATA SSD is going to beat the crap out of a platter disk, but we're talking about the new systems, which push gigabytes per second directly into memory. A game designed to load in 5 seconds from a disk pulling under 100 MB/s is going to load in a flash at 300-400 MB/s, but if a game is DESIGNED around the idea that you can move 5 gigs into memory in the time it takes to blink, you're going to be able to do new things.
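(Back-of-the-envelope on that example - a 5-second load at ~100 MB/s is roughly a 500 MB payload; the drive speeds below are ballpark assumptions:)

```python
payload_mb = 5 * 100  # a load that takes 5 s from a ~100 MB/s platter drive

# speeds in MB/s: SATA SSD, decent NVMe, PS5-class raw link
for name, speed in (("SATA SSD", 350), ("NVMe", 2500), ("PS5 raw", 5500)):
    print(f"{name:>8}: {payload_mb / speed:.2f} s")
```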
 

scalman

Member
Doesn't matter. I wasn't reading anything there, but it doesn't matter that you have the better thing - it won't be used in PC games, and on the PS5 it will be used fully. Learn the difference. Nothing exists that will be used better in real games than the PS5 SSD.
 
Last edited:

Dargor

Member
I feel that this thread is a great lesson for people who are always trying to make hot takes.

OP posted a video where even the dude who made it, after getting called out over it, is now claiming he was joking.

Come on, people, let's be smart. The dude is looking like a fool, OP is looking like a fool, and everyone is the poorer for it because they wasted time discussing a "joke" video.
 
Last edited:

scalman

Member
PCs were always behind consoles if you compare their specs; a console could evolve on the same specs, whereas on PC you always need to upgrade your damn specs to get anywhere.
Now PCs won't even be on the same level as the next-gen consoles.
 
Last edited:

The_Mike

I cry about SonyGaf from my chair in Redmond, WA
Linus uploaded a video not long ago where he whined, cried for help, and wanted to step down - he'd had enough of YouTube... and here he is now, shilling for Intel like a good boy, promoting "platform wars" again, etc.

I guess every YouTuber is a sl#t to some degree - one who's into clicks & views!

Don't forget to hit the bell, to subscribe, and to head over to our Patreon support page and donate your money. ALSO share with your friends, tell them to watch our content, and head over to the other channels we have and subscribe there.

But first, let's hear a word from our sponsors!
 