True, but MS uses APIs for BC, much like the ones on PC with DirectX. A new RTX 3090 PC can run a game at higher resolution and frame rate even though that game was written in 2016 and had no knowledge of said GPU. You can't push RT or other new features in without a patch, but there is no reason you cannot harness the raw horsepower to brute-force performance.
What you describe is linear scaling: if all you're asking of the hardware is to do the same work with more raw power behind it, it's easier for it to scale up nicely, because what you're doing is taking bottlenecks out of the equation.
By easier I should note that doesn't necessarily go hand in hand with any notion of efficiency. Rendering at much higher resolutions than originally intended can multiply the cost several times over, making whatever the game is doing even more inefficient. It's all a matter of how things have evolved. Best example: take a single-threaded full HD software renderer from 2013 (the year last-gen consoles launched). If you tried to push it to 4K now, even on a modern CPU it would choke harder than it did all those years ago at 1080p. It's four times the work, and while your PC is on paper surely four times more powerful, that power didn't arrive in a linear way: IPC has not improved 400%, and neither has frequency, so you're stuck. This was the reason we offloaded that work to GPUs in the first place.
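To put rough numbers on that point (these are purely illustrative figures I'm assuming for the sake of the math, not benchmarks):

```python
# Back-of-the-envelope: why a single-threaded renderer doesn't scale to 4K.
# All numbers here are illustrative assumptions, not measurements.

pixels_1080p = 1920 * 1080
pixels_4k = 3840 * 2160              # 4x the pixels, so ~4x the per-frame work

frame_time_2013_1080p = 33.3         # ms per frame on a 2013 CPU (assumed)

# Assume a modern core is ~2x faster single-threaded (IPC * clock), not 4x.
single_thread_speedup = 2.0

work_scale = pixels_4k / pixels_1080p                 # 4.0
frame_time_now_4k = frame_time_2013_1080p * work_scale / single_thread_speedup

print(f"~{frame_time_now_4k:.1f} ms per frame")       # ~66.6 ms -> ~15 fps, worse than before
```

Four times the work on a core that's only about twice as fast per thread leaves you slower than you started, which is exactly the trap.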
Now, if you look at a game like this, one that really wasn't updated to take advantage of the hardware and offload more work to it, there are quite a few bottlenecks an API can't fix. You can't move CPU work to the GPU, or turn things that were previously expensive into "free" ones. Imagine this game had a bokeh effect that was taxing for its time: it would cost more to run at a high resolution than a modern bokeh with low-level hardware support. A case in point is The Witcher 2's Cinematic Depth of Field, which continues to be a nightmare on top-range GPUs to this day.
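Just to illustrate how a brute-force full-resolution gather effect (old-style bokeh DoF being the classic case) balloons with resolution. The sample count is an assumption picked for the example, not what any particular game does:

```python
# Illustrative only: cost of a naive per-pixel gather filter scales with
# pixel count, so the same old effect gets much heavier at higher output res.

def gather_cost(width, height, kernel_samples):
    """Total texture samples for a naive full-resolution gather filter."""
    return width * height * kernel_samples

old_dof_720p = gather_cost(1280, 720, kernel_samples=64)
old_dof_4k   = gather_cost(3840, 2160, kernel_samples=64)

print(old_dof_4k / old_dof_720p)   # 9.0 -> nine times the samples going from 720p to 4K
```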
The way this was designed, parallelized and optimized means the game code is not "wide" enough to take advantage of hardware any wider than what existed back then. For wide to work, you have to make the work independent (asynchronous). With Skyrim you can't do that, because it elected to do things with a lot of back-and-forth dependencies, as was typical of the era.
Even if the game's core were properly multithreaded, the way it was written means the gains, while real and better than before, would still be relative (measured against the gold standard of brute force solving everything), because you'd still be processing synchronous events across multiple cores, with the associated overhead. Some things would finish really fast, sure, then have to wait for the ones that didn't before anything can move on.
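A quick sketch of that waiting problem (this is not how Skyrim is actually structured, just the general shape of a frame that has to synchronize on every task, with made-up job names and timings):

```python
# Illustrative sketch: even with work spread across threads, a frame that
# must sync on every task only advances as fast as its slowest job.
from concurrent.futures import ThreadPoolExecutor
import time

def job(name, duration):
    time.sleep(duration)              # stand-in for real work
    return name

with ThreadPoolExecutor(max_workers=4) as pool:
    start = time.perf_counter()
    # Three jobs finish quickly, one is held up by serial dependencies.
    futures = [pool.submit(job, n, d) for n, d in
               [("render", 0.002), ("audio", 0.002),
                ("particles", 0.002), ("scripts+AI", 0.016)]]
    results = [f.result() for f in futures]       # frame-end sync point
    frame_time = time.perf_counter() - start

print(f"frame took ~{frame_time * 1000:.0f} ms")  # ~16 ms: bound by the straggler
```

Three of the four cores sit idle most of the frame; the dependency-bound job sets the pace no matter how much extra hardware you throw at it.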
EDIT: I looked into Xbox One vs PS4 videos for this game, since the Xbox One had the weaker GPU but a slightly higher CPU frequency (1.6 GHz on PS4 vs 1.75 GHz on Xbone). I wasn't able to see any Xbox One advantage that would suggest the game was CPU limited on PS4 versus the Xbox One - quite the opposite (the PS4 had the better frame rate, which is kinda usual considering how severely bandwidth bottlenecked the Xbox One was). So again, any guess is a good guess.