I seriously doubt that. It's very rare for a game to outstrip the latest hardware so badly that even the lowest feasible resolution won't yield a locked 60FPS. A 3090 will be good for all games for another 7 years, easily. The next generation of consoles will still lag behind the actual GPUs coming from Nvidia/AMD. We often forget that it takes years to develop a console after the initial decision to pick its parts. If the PS6 comes out in 7 years, Sony has at most 1 more year from now to settle on hardware. They don't pick based on the latest graphics cards 6 years from now and then build the console in year 7.
Since the Ampere cards (specifically the 3090) were such a huge step above the 20-series boards, I don't see a PS6 having that kind of silicon for $500. A 3060Ti, however, I can see. The 1080Ti was the power equivalent of the PS5, and it was the pick even though the 2080Ti had been on store shelves for some time. Consoles are that far behind.
You are seriously underestimating how far GPUs advance over a 7 year time frame. When the PS4 launched, the top-end GPUs were the GeForce GTX 780 Ti and the Radeon R9 290X, both around 2.5x as fast as the PS4's equivalent GPU, the Radeon HD 7850. The R9 290X especially was 5.6 TFLOPs, roughly a 3x increase in raw numbers over the PS4 GPU. Both gaps are larger than the difference between the XSX GPU and the 3090/6900XT in normal rasterization performance. The XSX GPU is roughly equal to a 2080, and the 3090 is only 70% faster than that GPU. The 3060Ti is also less than 10% more powerful than the GPU in the XSX. That won't even cut it for a mid-gen refresh. Games that are going to stress the consoles in 5 or so years' time (i.e. 1440p/30FPS on console) will have the 3090 struggling with 1440p/60FPS on those titles, especially at the higher settings PCs run at. I don't think the 3090 will ever need to drop down to 1080p this generation, however.
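The rough math behind those ratios, using commonly cited FP32 figures (the PS4 GPU is ~1.84 TFLOPs) and the 70% rasterization gap stated above; TFLOP comparisons across architectures are approximate at best:

```python
# Generation-gap arithmetic using commonly cited FP32 TFLOP figures.
ps4_gpu = 1.84   # PS4 GPU, TFLOPs
r9_290x = 5.6    # Radeon R9 290X, TFLOPs

print(f"R9 290X vs PS4 GPU: {r9_290x / ps4_gpu:.1f}x")  # -> 3.0x

# This generation's rasterization gap (XSX ~= RTX 2080, 3090 ~70% faster):
gap_3090_vs_xsx = 1.70
print(f"3090 vs XSX-class GPU: {gap_3090_vs_xsx:.1f}x")  # -> 1.7x
```

A 3.0x lead for the top card at the PS4's launch versus a 1.7x lead today is the whole point: the PC's head start this generation is smaller than it was last time.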
Assuming the next generation of consoles is on 3nm, they are going to be quite a bit faster than the 3090 can ever hope to be. Even if RDNA 2 currently lags behind in RT performance, the AMD of today is not the AMD of 7-9 years ago. The Radeon division is not going to be starved of money and hemorrhage talent like it was back then. Over the next 7 years AMD will be putting far more money and resources into GPU research than it has previously, now that it is no longer on the brink of bankruptcy (and is frankly dominating the CPU segment on all fronts). MCM GPUs are also going to directly address the limits of the monolithic dies that AMD and Nvidia are currently facing. Radeon is playing catch-up on DLSS and RT performance, but DLSS is a software solution that does not require Tensor cores to run (although they help), and RT performance is obviously going to improve dramatically over the next 7 years. They managed to catch Nvidia in normal rasterization, and with much better performance per watt to boot.
I can easily see the next Xbox and PS6 being 4-5x as powerful as the current consoles at a minimum. For the PS6 especially, Sony can just double the current GPU size and raise clock speeds by another 50% (doable on 3nm; current RDNA 2 cards on 7nm already reach 2.6GHz). Combined with IPC increases, that would mean a 4x jump over the current PS5 GPU, which would obliterate the 3090 in rasterization. Bandwidth can certainly scale up as well, given the advancements being made in GDDR and HBM.
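As a back-of-envelope sketch of that scaling argument (all multipliers are my assumptions for illustration, not leaked specs; the IPC figure in particular is just a plug to show how the pieces compound):

```python
# Hypothetical PS6 GPU scaling vs the PS5 GPU, per the argument above.
die_scale   = 2.0   # assumption: double the GPU size on 3nm
clock_scale = 1.5   # assumption: ~50% higher clocks (RDNA 2 on 7nm hits ~2.6GHz)
ipc_scale   = 1.33  # assumption: modest per-clock architectural gains

total = die_scale * clock_scale * ipc_scale
print(f"~{total:.1f}x the PS5 GPU")  # -> ~4.0x
```

Size and clocks alone give 3x; even a modest IPC bump on top gets you to the 4x floor, before counting any bandwidth improvements.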