Xdrive05
Member
Question in the title. I'm talking about Nvidia's RTX and AMD's RDNA2 hardware ray tracing solutions. I understand that enabling ray tracing hits performance pretty hard, but if you DON'T use it, does that extra silicon just go to waste? Or does it help get non-RT frames out the door faster?
My google-fu is failing me as I can't seem to find a straight answer to this question. I may also be misunderstanding what "RT cores" even means. I guess my same question would apply to "tensor cores" too. Thanks in advance for any help clarifying this.