No.
First off, the console CPUs are decent now. You know that.
Compared to the top-end CPUs on the market now, the ones in the consoles are mid-tier. They're based on mobile Zen 2 designs; this is nothing new. Readily available top-end PC CPUs have more cores and threads, and that's just one example of where they're ahead of the consoles.
Secondly, general-purpose CPUs in no way compete with custom-designed hardware meant to perform a specific task. That's why specialized hardware exists.
They can, depending on the task at hand and the total throughput of the dedicated hardware compared to that of the general-purpose hardware. Both have tradeoffs; custom-designed hardware doesn't have an outright advantage over general-purpose hardware for all purposes.
That's why we have GPUs, for crying out loud, unless you still want to live in a world of software-based rasterizers. Sorry dude, you don't know what you're talking about. You're talking to a computer scientist.
You're definitely not the only one who knows about computer hardware. I'm just able to look at this more objectively and with better nuance, at least on this topic, apparently.
And why would you want to use your GPU to decompress data? Then it can't, you know ... compute visuals.
Nvidia added decompression cores to their GPUs to aid in data decompression. In fact, GPUs in general have decompression hardware onboard, since GPU-ready graphics data is kept somewhat compressed anyway as it's sent over the memory bus; this helps alleviate bandwidth constraints. IIRC, AMD's HBCC for Vega cards also had some decompression features built in, though that was more a means of directly mapping NAND storage into the same virtual address space as RAM (with things like SAM/Resizable BAR and new features in PCIe 4.0, I guess HBCC became redundant).
Higher-end GPUs have no issue doing what I just described, since they have the spare resources to do both.
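To put some rough numbers on the bandwidth point above: the savings from keeping GPU-ready textures compressed fall straight out of the fixed BCn block sizes (a 4x4 texel block is 64 bytes as raw RGBA8, 8 bytes in BC1, 16 bytes in BC7). A quick sketch, with the texture dimensions picked purely for illustration:

```python
# Back-of-the-envelope: memory-bus traffic for block-compressed textures
# versus raw RGBA8. Per-block sizes are fixed by the BCn format definitions.

def texture_bytes(width, height, bytes_per_block, block_dim=4):
    """Size in bytes of a width x height texture built from 4x4 blocks."""
    blocks = (width // block_dim) * (height // block_dim)
    return blocks * bytes_per_block

W, H = 4096, 4096  # illustrative 4K texture
raw = texture_bytes(W, H, 64)  # raw RGBA8: 16 texels * 4 bytes per block
bc1 = texture_bytes(W, H, 8)   # BC1: 8 bytes per block  -> 8:1 ratio
bc7 = texture_bytes(W, H, 16)  # BC7: 16 bytes per block -> 4:1 ratio

print(f"raw RGBA8: {raw / 2**20:.0f} MiB")
print(f"BC1:       {bc1 / 2**20:.0f} MiB ({raw // bc1}:1)")
print(f"BC7:       {bc7 / 2**20:.0f} MiB ({raw // bc7}:1)")
```

So a single 4K texture drops from 64 MiB to 8-16 MiB of bus traffic, which is exactly why the hardware decode path is worth the silicon.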
Again, that's the point: the PS5 has a dedicated hardware solution that utilizes a superior compression scheme AND offloads it all onto custom silicon. Not the CPU, not GPU cores. That's why games like Ratchet and the PS5 UE5 demo we saw can load things in on a per-frame basis, depending on the movement of the camera.
The CPU still needs to coordinate some of the commands to the I/O processing units. Also, I wouldn't point to Rift Apart or even the UE5 demo as chief examples of PS5's SSD I/O superiority. The former doesn't need anywhere near the speed or resources of the quoted specs, going by various tests, and the latter has been matched in complexity by multiplatform demonstrations with other tech pieces (not to mention, the concept it illustrates is easily doable on other hardware with less robust SSD I/O, as long as that hardware supports the specific functions that would be required).
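Worth grounding what's actually being argued about here: effective I/O throughput is just raw link bandwidth times the compression ratio, regardless of whose silicon does the decompressing. A minimal sketch, using the PS5's published 5.5 GB/s raw figure; the ratios are assumed, workload-dependent values (Sony's own "typical" quote of 8-9 GB/s implies roughly 1.5-1.6x on average):

```python
def effective_throughput(raw_gb_s, compression_ratio):
    """Data delivered per second, assuming the decompression hardware
    keeps pace with the link (the point of a dedicated decode block)."""
    return raw_gb_s * compression_ratio

raw = 5.5  # PS5's published raw SSD bandwidth in GB/s
for ratio in (1.0, 1.5, 2.0):  # illustrative compression ratios
    print(f"ratio {ratio:.1f}x -> {effective_throughput(raw, ratio):.2f} GB/s effective")
```

The same arithmetic applies to any platform's scheme; only the raw bandwidth and the achievable ratio differ.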
It's amazing to me that the same core group of people just can't let certain basic things go. The PS5 has a really impressive memory architecture, and it's used in games on shelves right now. That doesn't take away from whatever other stuff might be your favorite. Just let it go. Of course DirectStorage will make a "difference" in the future. But that's the point: it's still going to be years before it's used regularly across games, and it's still not as good as the solution in the PS5. You can deny that, but that's called not living in reality.
Why are you being so defensive over the PS5's SSD I/O? I hate to break it to you, but at a fundamental level it's not doing anything outright "amazing" that no other solution can match. It's a specific implementation of storage I/O that Sony felt was useful for their particular console design, but equivalent implementations and approaches are available, and in certain markets, like enterprise, have been available for years, if not almost a decade.
No need to get so defensive about this just because there are faster SSDs coming to market. Besides, I never said PS5's SSD I/O wasn't impressive; that said, it's not the "motherland holy grail of tech" some of you are so keen on hyping through the roof. It's great stuff, but it's not in a league of its own.
Anyway, I don't want to rehash this stuff and get into what amounts to silly warring, so I'm out of this thread. You guys need to stop being so defensive over this stuff. It's insane that this bothers you. Just get all the platforms (the last game I played was Halo Infinite, what a shock) and have fun instead of doing ... whatever this is.
Dude, this is a bit hilarious considering you were the one being super-defensive about this xD
Edit: Oh yeah, the XSX has a better CPU and generally better GPU than the PS5. Oh no, the humanity! See? It's not hard to just say how things are without it affecting your emotional state.
What does the Series X having a "better" CPU (in reality essentially the same feature-wise, just somewhat higher-clocked) and a "generally" better GPU (depending on what you care about; compute and raytracing favor it, while rasterization and cache management favor the PS5's) have to do with me saying there are CPUs available now that outpace the ones in the Series X and PS5 by a considerable margin?
You actually could put this drive into a PS5 and use it for expansion; PCI Express is backwards compatible. The drive would just top out at maximum PCIe Gen4 speeds inside the PS5, so Gen5 drives will still offer very fast expansion options for PS5 owners.
It's not just about bandwidth; there are features in 5.0 that aren't supported in 4.0, so a device on a PCIe 4.0 interface won't be able to leverage PCIe 5.0-specific features:
PCIe 5.0 simple breakdown
The features in question don't mean much to a gaming console, but it's something worth keeping in mind.
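For the bandwidth side of the Gen4-cap point, the numbers fall straight out of the spec: throughput doubles per generation, and PCIe 3.0 through 5.0 all use 128b/130b line coding, so per-lane bandwidth is just transfer rate times encoding efficiency. A quick sketch:

```python
# Approximate usable PCIe bandwidth per link. Transfer rates (GT/s)
# come from the spec; 128b/130b line coding applies from Gen3 onward.
GEN_GTS = {3: 8.0, 4: 16.0, 5: 32.0}  # GT/s per lane
ENCODING = 128 / 130                   # 128b/130b efficiency

def link_bandwidth_gbs(gen, lanes=4):
    """Usable bandwidth in GB/s for a PCIe link (x4 by default)."""
    return GEN_GTS[gen] * ENCODING * lanes / 8  # bits -> bytes

for gen in (3, 4, 5):
    print(f"PCIe {gen}.0 x4: ~{link_bandwidth_gbs(gen):.2f} GB/s")
```

So a Gen5 x4 drive capped at Gen4 speeds inside a PS5 tops out around 7.9 GB/s instead of roughly 15.8 GB/s, which is still well above what current console workloads saturate.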