
Intel details different frame generation tech — ExtraSS uses extrapolation instead of AMD and Nvidia's approach of Interpolation

LordOfChaos

Member


Intel is preparing to introduce its own frame generation technology similar to DLSS 3 and FSR 3, called ExtraSS (via WCCFTech). Detailed at Siggraph Asia in Sydney, ExtraSS is not just a clone of DLSS 3 or FSR 3: instead of using frame interpolation, it uses frame extrapolation. Although these two methods are very similar, extrapolation has some advantages that could set ExtraSS apart from its competitors.

On the whole, ExtraSS is pretty similar to DLSS's and FSR's respective frame generation technologies. Intel has built on top of XeSS and makes use of motion vectors and spatial data to improve visual quality, but with extrapolation, the data used to make a new frame is very different. Instead of using two frames to create a new one to insert in between (that's the inter in interpolation), extrapolation takes just one frame to generate the new one.

The obvious disadvantage here is the lack of extra data to put into Intel's algorithm. Extrapolation requires a higher resolution input and could still result in lots of visual glitches and artifacts, as Intel admits in its white paper. However, the benefit is that there is a reduced latency penalty compared to interpolation, which has to delay frames so it can generate new ones (otherwise, they'd show up out of order).
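To make that distinction concrete, here's a minimal illustrative sketch (my own, not Intel's actual pipeline; a crude horizontal pixel shift stands in for motion-vector reprojection, and the function names are made up). The point is structural: interpolation needs the next real frame before anything can be shown, while extrapolation only reuses a frame it already has.

```python
# Illustrative only: the structural difference between interpolated and
# extrapolated frame generation. A horizontal pixel shift stands in for
# motion-vector-based reprojection.
import numpy as np

def warp(frame: np.ndarray, shift_px: int) -> np.ndarray:
    """Crudely 'reproject' a frame by shifting it sideways."""
    return np.roll(frame, shift_px, axis=1)

def interpolated_frame(prev_f, next_f, motion_px):
    # Needs the NEXT real frame, so that frame must be held back before
    # display -- this hold is where the latency penalty comes from.
    return 0.5 * warp(prev_f, motion_px // 2) + 0.5 * warp(next_f, -(motion_px // 2))

def extrapolated_frame(latest_f, motion_px):
    # Only uses the most recent frame plus its motion vectors; nothing is
    # held back, but newly revealed (disoccluded) areas are pure guesswork,
    # hence the higher risk of artifacts.
    return warp(latest_f, motion_px)

if __name__ == "__main__":
    prev_f = np.zeros((2, 8)); prev_f[:, 2] = 1.0   # object at column 2
    next_f = np.zeros((2, 8)); next_f[:, 4] = 1.0   # same object at column 4
    print(interpolated_frame(prev_f, next_f, motion_px=2))  # blended at ~column 3
    print(extrapolated_frame(next_f, motion_px=2))          # predicted at column 6
```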



Nice to see differing approaches being applied. The tradeoffs here may make sense: by reducing the latency penalty, maybe it's better suited to lower framerates, like on IGPs, boosting smoothness, whereas AMD's, for example, really only works above 60fps anyway because of the lag it applies.
 

Kuranghi

Member
Being dramatic: this sounds like it will be shit. Interpolation, even with all the engine information like motion vectors, still has problems with complex motion (a repetitively-patterned object passing by a finely detailed chainlink fence, for example), so extrapolation, without the future frame of data to "guess" from, is going to look even worse. Who cares if there's no lag penalty if it looks really artifacty? Especially since they're saying it's better for lower framerates due to this difference, so there will be way more time/change between each frame, making the job even harder.

When you use DLSS 3 FG with a less-than-60fps input, does the input lag increase above what it would be at the starting framerate? So for example, 30fps becomes 60fps but it still feels like 30fps? Or does it feel worse than 30fps? I know it's not ideal, but surely even if it feels exactly the same as 30fps, the increased motion smoothness still makes it worth doing overall.
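For what it's worth, my rough back-of-envelope (illustrative numbers only; it ignores generation cost, frame pacing and Reflex): the generated frame has to be shown before the newest real frame, so that real frame is pushed back by at least one output-frame interval, i.e. half the input frame time.

```python
# Rough lower bound on the extra display delay interpolation adds
# (illustrative; ignores generation cost, frame pacing and Reflex).
def min_added_delay_ms(input_fps: float) -> float:
    output_fps = 2 * input_fps   # one generated frame per real frame
    return 1000.0 / output_fps   # newest real frame waits one output interval

for fps in (30, 45, 60):
    print(f"{fps} fps input -> at least ~{min_added_delay_ms(fps):.1f} ms added delay")
# 30 fps -> ~16.7 ms, 45 fps -> ~11.1 ms, 60 fps -> ~8.3 ms
```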

With "ExtraSS" it sounds like it will have reduced input lag but it will look noticeably worse in motion vs. AMD/Nvidias interpolation methods. When I hear people complaining about 30fps its usually much more about the loss of motion clarity than the decreased responsiveness. Not to say thats not a big reason why people want higher framerates, but it seems like it secondary ime, so interpolation is the better method overall.
 

Reizo Ryuu

Gold Member
sounds like it might be extra-ass

 
We really need Intel to nail the software and hardware with Battlemage. We desperately need a strong third player in the GPU market. AMD no longer seems to care much about the high end, and Intel is just getting into the market.
 

LordOfChaos

Member
We really need Intel to nail the software and hardware with Battlemage. We desperately need a strong third player in the GPU market. AMD no longer seems to care much about the high end, and Intel is just getting into the market.

I hope they keep trying until they make it. The first gen was already doing better RT and upscaling with dedicated hardware than AMD, if you compare at the same GPU raster performance tier. It's ok that it didn't set the world on fire, and they've had a generation to improve their drivers drastically.

They're still a cashflow monster, as much as the TikTok investor generation thinks they're dead, and with a serious effort they could be a very serious threat to AMD in particular. Nvidia will probably remain the high-end champ for a while to come, and is chasing much higher-margin AI training GPUs anyway.
 