Elog
Timestamp? Where are you getting this observation from?
Apologies for the late response; work got in the way.
Let's go through the two sections that are most relevant.
1) First Linus runs ATTO (best timestamps at 7:28 and 7:37), a synthetic Windows benchmark tool. The graph shows the file sizes tested and the theoretical read/write speeds you get. As you can see, R/W throughput increases with file size because per-request latency dominates at small sizes, i.e. the larger the file, the faster the R/W. Please also note that ATTO uses the L1 and L2 caches so it is not slowed down by the CPU/RAM, i.e. the numbers are theoretical. You can see this at the 7:28 timestamp, where RAM utilisation is at just 4% and the CPU at 3%.
Depending on the file size, the bandwidth ranges from roughly 40 MB/s to 20 GB/s, with a latency/response time of 4,000 ms.
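The scaling in 1) can be sketched with a toy model: if every request pays a fixed latency before transferring at peak bandwidth, small transfers are latency-bound and large ones approach peak. The latency and bandwidth values below are illustrative assumptions of mine, not measurements from the video:

```python
# Toy model (my assumption, not from the video): time per transfer =
# fixed per-request latency + transfer time at peak bandwidth.
def effective_throughput_mb_s(size_mb, latency_s=0.004, bandwidth_mb_s=20_000):
    """Effective throughput for one transfer of `size_mb` megabytes."""
    total_time = latency_s + size_mb / bandwidth_mb_s
    return size_mb / total_time

# Throughput climbs with transfer size, as in the ATTO graph:
for size in [0.5, 4, 64, 1024]:
    print(f"{size:>7} MB -> {effective_throughput_mb_s(size):8.0f} MB/s")
```

With these assumed numbers a 0.5 MB transfer manages only ~125 MB/s while a 1 GB transfer gets close to the 20 GB/s peak, which is the same qualitative shape ATTO shows.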
2) Then Linus reads 16 files at 4K in parallel (best timestamps at 8:11 to 8:14). Please note that 16 huge files in Windows is a best-case scenario, since, for example, textures come in thousands of files, which pushes performance down just as the small transfer sizes do in ATTO above. Here you see that reading these files into RAM utilises 32 cores at 100% (roughly 50% of the 3990X Threadripper), so the CPU power required is immense. Secondly, you see that the read throughput is 1.5 GB/s at 50% disk utilisation. Is there another bottleneck? Or is the 'real' throughput 3 GB/s in this best-case Windows example? I do not know. The response time is good (not surprising with only 16 files) at around 1 ms.
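The setup in 2) amounts to reading many files in small chunks across worker threads and measuring aggregate throughput. A minimal sketch, assuming a thread pool and 4 KiB chunked reads (this is my own illustration, not the tool used in the video, and a real test would use large files on an NVMe drive rather than the tiny demo files here):

```python
# Sketch: chunked parallel reads with a thread pool, reporting MB/s.
import os
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4096  # 4 KiB reads, as in the video's 4K test

def read_in_chunks(path):
    """Read one file 4 KiB at a time; return total bytes read."""
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
    return total

def parallel_read(paths, workers=16):
    """Read all files concurrently and return aggregate throughput in MB/s."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        total_bytes = sum(pool.map(read_in_chunks, paths))
    elapsed = time.perf_counter() - start
    return total_bytes / elapsed / 1e6

# Demo on 16 small temporary files (these land in the page cache, so the
# number measures overhead, not disk speed):
with tempfile.TemporaryDirectory() as d:
    paths = []
    for i in range(16):
        p = os.path.join(d, f"f{i}.bin")
        with open(p, "wb") as f:
            f.write(os.urandom(1 << 20))  # 1 MiB each
        paths.append(p)
    print(f"{parallel_read(paths):.0f} MB/s on the demo set")
```

Even this cached toy run shows the pattern from the video: the per-chunk work is CPU-side overhead, which is why scaling to thousands of files eats cores.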
Net-net: using thousands of files under Windows would most likely give you around 1.5 GB/s, latencies/response times around 1,000 ms, and would require 32+ CPU cores to pull off. That is below what was shown with the PS5 (according to Epic). And we have not yet added VRAM transfer and driver overhead to these numbers to make them comparable to the Epic showcase.
I still think the Linus video is impressive, but if the PS5 delivers what Epic and Cerny claim, it is in a different league. See TS response below: