How badly affected are these GPU's in real world terms?
Like, do they perform noticeably worse? Do they die quickly?
There's actually little public data on GPUs specifically, beyond anecdotal stories (or the data manufacturers hold internally).
However, electromigration and oxide breakdown in transistors (which GPUs are full of) both increase with heat. Metals also expand under heat, and solder, being a metal alloy, does too. That was often the reason so many Xbox 360s died back then; probably cheap and/or cheaply applied solder.

Typical consumer behaviour is to heat up the hardware for a short burst of gaming and then let it cool down. Unless your QA sucks, circuit boards and solder should be designed to withstand that intended usage over the product's lifespan. I remember my dad telling me in the 90s that it's bad to quickly switch electronics on and off. That's true to a point. Other data, however, suggest that junction temperatures above 125°C decrease lifespan exponentially, but a GPU never reaches those temps, even at the hotspot.
So in terms of your GPU there's a lot of data to process and apply if you want to give (or get) a definitive answer. If you're further interested in the topic, look up the Arrhenius equation and electromigration on GPUs.
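To give a feel for what the Arrhenius equation actually says: a rough sketch of the standard acceleration-factor form (how much faster a thermally activated failure mechanism progresses at a hotter temperature). The 0.7 eV activation energy is an assumed ballpark figure often quoted for electromigration, not a GPU-specific value, and the temperatures are made up for illustration.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_acceleration(t_use_c: float, t_stress_c: float,
                           ea_ev: float = 0.7) -> float:
    """Acceleration factor: how much faster a failure mechanism
    progresses at t_stress_c compared to t_use_c (both in deg C).
    ea_ev is the activation energy of the mechanism; 0.7 eV is an
    assumed ballpark for electromigration, not a measured GPU value."""
    t_use = t_use_c + 273.15      # convert to Kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_use - 1.0 / t_stress))

# Example: a chip spending its life at 95 degC vs 65 degC ages
# roughly 7x faster under this model (with the assumed 0.7 eV).
print(round(arrhenius_acceleration(65, 95), 1))
```

The point isn't the exact number; it's that the dependence on temperature is exponential, which is why the 125°C junction figure matters so much more than a 10°C difference at normal operating temps.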
I'd say if your GPU doesn't belong to the roughly 3% average RMA cases (across all manufacturers) that will be faulty no matter how you run it, you'll be fine.
What could happen, though, is that the thermal paste turns to dust after prolonged use and heat. I bought a used Xbox One X and the fan was noisy as fuck, like a jet. This is what the paste looked like:
I removed it, applied fresh paste, and the fan ran fine and much more quietly.
So if you buy a card whose warranty is already void, I'd recommend checking the paste.