I was trying to wrap my head around how that is supposed to be an advantage and I couldn't. Could you help me?
So I want to render some scene.
Smart ass on my team tells me: hey, we know that stuff which is behind the avatar will not be visible, so why don't we ignore that altogether to speed things up?
Hell yeah!
Now I add reflections, and while it works and is darn fast too, the earlier optimization means those culled objects never show up in the reflections.
But why on earth would I need to do brute-force ray tracing for reflections instead of just NOT doing that optimization? I could simply stop ignoring objects that are behind the avatar. It would be slower than the optimized version, but darn faster than rendering all of it with ray tracing.
It's not that the objects behind the player are removed from the reflection to save resources; rather, they were never available in the first place, since SSR can only trace rays against what is already in the player's view. From that document I linked earlier:
"In a nutshell: we transform the reflection ray from view space into screen space, and then move along this ray until we 'step through the depth buffer'. By this algorithm, we hope to find the intersection of a ray against the scene geometry, which is stored in form of the depth buffer. That means, in particular, we can only find intersections with geometry which is already visible on the screen. This is why it is called a screen space effect."
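To make that "step through the depth buffer" idea concrete, here is a toy sketch of the march (not a real shader, and all the names and the 1D "screen" are mine): walk along the ray in screen space, and report the first pixel where the ray's depth goes behind the stored depth. If the ray leaves the screen first, SSR simply has no data, which is exactly why culled or off-screen objects can never appear in the reflection.

```python
def march_screen_space(depth_buffer, start_x, start_depth, step_x, step_depth, max_steps=100):
    """March along a reflection ray in screen space; return the pixel index
    where the ray first 'steps through' the depth buffer, or None."""
    x, d = start_x, start_depth
    for _ in range(max_steps):
        x += step_x
        d += step_depth
        px = int(x)
        if px < 0 or px >= len(depth_buffer):
            return None   # ray left the screen: SSR has nothing to sample here
        if d >= depth_buffer[px]:
            return px     # ray passed behind visible geometry: intersection found
    return None

# A 1D "depth buffer": background at depth 5.0, with a closer object
# (depth 2.0) covering pixels 6..8.
depth = [5.0] * 10
for i in range(6, 9):
    depth[i] = 2.0

# A reflection ray starting at pixel 0, depth 0, moving right and away:
print(march_screen_space(depth, 0, 0.0, 1.0, 0.5))  # -> 6
```

Anything that was never drawn into `depth_buffer` (because it is behind the avatar, off-screen, or culled) simply cannot be hit by this loop, no matter how the ray is aimed.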
If you want to expand this technique to include everything in the reflection, you would also need to build up a model of everything in the scene (not just what is visible to the player camera) and trace rays into that model to figure out exactly what would be reflected at the player's current position. Which is what ray tracing does, and it is super expensive.
Planar reflections are a kind of "hack" to get around this, by rendering another camera view to a texture. (So you are essentially rendering the whole scene again, from the perspective of the reflection.)
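That "second camera" is just the player camera mirrored across the reflecting plane. A minimal sketch of the mirroring for a horizontal mirror plane (names are mine, not from any particular engine):

```python
def mirror_across_plane(point, plane_height=0.0):
    """Reflect a 3D point (x, y, z) across the horizontal plane y = plane_height.
    In a planar-reflection setup, the camera position (and, mirrored the same
    way, its view direction) is reflected like this, and the scene is then
    rendered again from that mirrored camera into a texture."""
    x, y, z = point
    return (x, 2.0 * plane_height - y, z)

# Player camera 3 units above a mirror floor at y = 0:
print(mirror_across_plane((1.0, 3.0, 2.0)))  # -> (1.0, -3.0, 2.0)
```

Because this second pass is a full render from the mirrored camera, it does include objects behind the player, but you pay for the scene roughly twice, and it only works for flat reflectors like floors, mirrors, and calm water.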