That is theoretically true, but once again, PC tests show that for modern engines that is simply not the case... they easily use all available cores of a GPU, which is why you see an RTX 2080 Ti outperform an RTX 2080 at the same clock speeds. If games didn't use the power of the additional hardware, they would run the same, which they don't.
You run the same game on two GPUs, one running fewer active CUs at higher clocks and one running more CUs at lower clocks, both with the same peak TF performance. If high clocks benefited the game engine, the games would run better on the lower-CU, higher-clock GPU, but they don't.
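For reference, here is the peak-TF arithmetic behind that comparison, using RDNA's 64 shaders per CU and 2 FLOPs per shader per clock. The two configurations below are invented to come out equal, they are not actual products:

```python
# Peak FP32 throughput for an RDNA-style GPU:
# CUs * 64 shaders/CU * 2 FLOPs per shader per clock * clock (GHz) -> TFLOPS
def peak_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

# Two hypothetical GPUs with identical peak TF:
narrow_fast = peak_tflops(36, 2.0)   # fewer CUs, higher clock
wide_slow   = peak_tflops(48, 1.5)   # more CUs, lower clock

print(f"{narrow_fast:.3f} TF vs {wide_slow:.3f} TF")  # both 9.216 TF
```

Same headline number on paper, which is exactly why the on-paper number alone can't tell you which one runs games faster.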
What you are saying is true, but that is because the frequencies that PC parts ship with are already pretty close to being bottlenecked by one of their (GPU) components. Let's take a look at the RX 5700 and RX 5700 XT, which overclock to over 2 GHz. Doing so only gives you a marginal performance increase. The further you push, the smaller the increase gets; at some point performance even seems to decrease, and power draw scales nowhere near linearly. Digital Foundry has a video on it. The main reason is that the architecture has reached a limiting factor, or bottleneck if you will. So for Sony to push to such a high frequency must mean that these bottlenecks have been addressed by AMD, and a limiting factor should only appear when pushing the hardware even further than that 2.23 GHz. Ideally, the chosen frequency is the sweet spot for the architecture, and performance up to that point could increase nearly linearly. That is what Sony should have pushed for in the best-case scenario.
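To make "the further you push, the less you get" concrete, here is a toy calculation. The clock/FPS pairs are invented for illustration, not measured benchmark data:

```python
# Illustrative clock-vs-framerate pairs for a hypothetical overclock run.
# These numbers are made up for the example, not benchmark results.
samples = [
    (1750, 60.0),  # (core clock in MHz, average FPS)
    (1900, 63.0),
    (2050, 64.5),
    (2100, 64.8),
]

base_clock, base_fps = samples[0]
for clock, fps in samples[1:]:
    clock_gain = (clock / base_clock - 1) * 100
    fps_gain = (fps / base_fps - 1) * 100
    # "scaling efficiency": how much of the clock increase shows up as FPS
    print(f"+{clock_gain:.1f}% clock -> +{fps_gain:.1f}% FPS "
          f"({fps_gain / clock_gain:.0%} scaling)")
```

In this made-up run, each extra step of clock converts into a smaller and smaller share of actual framerate, which is the diminishing-returns pattern described above.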
Regarding higher CU counts, I do agree that parallel workloads get increasingly harder to scale at higher CU counts. But they are nowhere close to being as hard as on CPUs. The example you give with the RTX 2080 and 2080 Ti is good, but those are accessed through high-level hardware APIs. Consoles give low-level hardware access, meaning that as a developer you can pretty much write your code for that specific hardware and debug it with a PS5/XSX dev kit. The higher CU count MS has gone for should not hinder anyone from doing excellent work on it. In fact, the further we go into the next generation, the more both systems will be able to show their true potential.
So what I am trying to say is: neither Sony nor Microsoft should be at a level where their frequency or GPU size is bottlenecked by one of its sub-components, since that would be very inefficient in its design, and any uplift in frequency or size costs them a lot of money.
As to why that frequency is so high? The reason is still unknown, but I assume they have planned for a specific workload to run faster. Certain workloads benefit heavily from increases in frequency and faster serial operations rather than from parallel capability. I work on mobile devices, and those are a bit different, but the same principles apply. On smartphones, especially Android devices, clock speeds are never sustainable, meaning that they throttle in order not to overheat the entire system. This downclocking depends on many factors, ambient temperature being one example.
So our process is like this: we look at a certain device, which we agree is the baseline for our performance goal. Then we look at what its general performance is like, how much it throttles under load in bad conditions, and how well it handles specific scenes, especially harder ones, and then optimise our code to that performance level. This is why benchmarks or metrics like teraflops are pretty useless to me, since they do not give guaranteed performance indications on a frame-by-frame basis. Also, they measure one specific type of workload; different architectures give different results depending on the type of workload you are running.
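The targeting process above boils down to optimising for the worst sustained level rather than the peak. A minimal sketch, with invented numbers for a hypothetical baseline device:

```python
# Invented FPS samples from a stress run on a hypothetical baseline
# device, taken as it heats up and throttles over time.
fps_over_time = [60, 59, 55, 48, 45, 44, 44, 43, 44, 43]

peak = max(fps_over_time)
# Target the sustained floor, not the peak: the level the device
# still holds once throttling has settled.
sustained = min(fps_over_time)

print(f"peak {peak} FPS, but optimise for ~{sustained} FPS sustained")
```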
The PlayStation 5 seems to run at that frequency almost all the time, which is a good thing, and unlike mobile devices it throttles based on a power budget in order to keep a certain temperature under any type of workload. If there are no bottlenecks inside the GPU architecture preventing the PS5 from running at such a high frequency, the benefits could be really high, but that has yet to be seen. I am looking forward to playing on these new machines.
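A crude way to picture power-budget throttling, as opposed to temperature-reactive throttling: pick the highest clock that fits the budget for the current workload. The cubic power curve and all constants here are entirely made up; real power behaviour is far more complex:

```python
# Toy model: GPU power grows roughly with the cube of clock speed
# (voltage rises with clock, and power ~ V^2 * f). Constants invented.
MAX_CLOCK_GHZ = 2.23
POWER_AT_MAX = 180.0  # watts at max clock, purely illustrative

def power_at(clock_ghz):
    return POWER_AT_MAX * (clock_ghz / MAX_CLOCK_GHZ) ** 3

def clock_for_budget(budget_watts):
    """Highest clock that fits inside the power budget, capped at max."""
    clock = MAX_CLOCK_GHZ * (budget_watts / POWER_AT_MAX) ** (1 / 3)
    return min(clock, MAX_CLOCK_GHZ)

# A light workload fits the budget at full clock; a heavy one forces
# a small downclock instead of letting temperature drift upward.
print(f"{clock_for_budget(200):.2f} GHz")  # budget above peak -> full clock
print(f"{clock_for_budget(160):.2f} GHz")  # tight budget -> modest downclock
```

The point of the sketch: because power falls off steeply with clock in this kind of model, staying inside the budget costs only a small frequency drop, which matches the "runs near max almost all the time" behaviour described above.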