He didn't say that. While higher clock speeds have their own benefits, what he said was,
"it is easier to fully use 36 CUs in parallel than it is to fully use 48 CUs". His reasoning behind that was:
"...when triangles are small it's much harder to fill all those CUs with useful work".
"Triangles are small"... Does this ring a bell? UE5's Nanite tech is exactly that: a micropolygon renderer that can render huge numbers of triangles very fast while shrinking those triangles down to roughly pixel size, according to Epic, resulting in increased geometric detail. This is perhaps where the industry is moving, starting with UE5.
After all, it's well known at this point that Cerny doesn't shy away from talking to devs, first-party and third-party alike. He likely has insight into where the industry is headed over the next few years: what most teams are planning to do with their engines, what rendering systems or features they're going to develop or utilize to push next-gen games, and so on. And let's not forget Cerny is a dev himself with over 30 years of experience (which, I'll admit, is longer than my whole existence on this planet).
But I digress. His point was that it's easier to fully use a GPU with 36 CUs than one with a larger CU count, which is true, and we can see this in the RDNA 2 CU scaling benchmark performed by ComputerBase. They tested 40, 60, 72, and 80 CUs at an identical 2000 MHz frequency. Going from 40 CUs to 60 CUs is a 50% increase in physical core count, and therefore a 50% increase in TFLOPs (10.24 TF to 15.36 TF at 2 GHz). Yet the 60 CU part, at best (in 4K), gained an average of just 36% in performance over the 40 CU part (and remember, the 60 CU part has a 33% bandwidth advantage on top). When you reduce the resolution, the gains for the bigger GPU get even smaller.
Now factor in a clock speed increase for the smaller GPU, and that 36% perf gain (best-case scenario) shrinks even further.
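The arithmetic above can be sketched as a quick back-of-the-envelope calculation. This assumes the standard RDNA 2 peak FP32 formula (CUs x 64 shaders x 2 FLOPs per clock x frequency); the 36% figure is the ComputerBase 4K average quoted above, not something this snippet measures:

```python
# Back-of-the-envelope RDNA 2 CU scaling math (numbers from the ComputerBase test)

def rdna2_tflops(cus: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPs: CUs x 64 shaders x 2 FLOPs/clock x clock in GHz."""
    return cus * 64 * 2 * clock_ghz / 1000

small = rdna2_tflops(40, 2.0)   # 10.24 TF
big = rdna2_tflops(60, 2.0)     # 15.36 TF

tflops_gain = big / small - 1   # 0.50 -> a 50% on-paper compute advantage
measured_gain = 0.36            # ComputerBase's best-case (4K) average gain

# How much of the extra compute actually turned into frames:
scaling_efficiency = measured_gain / tflops_gain   # 0.72 -> 72%
print(f"{tflops_gain:.0%} more TFLOPs -> {measured_gain:.0%} more perf "
      f"({scaling_efficiency:.0%} scaling efficiency)")
# -> 50% more TFLOPs -> 36% more perf (72% scaling efficiency)
```

In other words, the wide GPU only converts about 72% of its paper advantage into real performance at 4K, and that conversion rate drops further at lower resolutions.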
Case in point, here's more evidence:
Godfall's result disagrees with you. It doesn't exist on last-gen consoles.
I expect UE5 and other engines using micropolygons for their games/scenes to scale better on PS5. In the end, the performance difference between the two consoles will be close. Those of you hoping and dreaming for a PS4 Pro vs X1X level of difference between PS5 and XSX are in for a rude awakening, unfortunately. It's just that: a dream.