
Intel claims big performance improvements with their drivers since Arc launched

winjer

Gold Member


  • 1080p Avg FPS: Up to 77% improvement
  • 1080p 99th Percentile Normalized: Up to 114% improvement
  • 1440p Avg FPS: Up to 87% improvement
  • 1440p 99th Percentile Normalized: Up to 123% improvement
  • Aggregate DX9 FPS improvement: 43%
  • Aggregate DX9 99th Percentile improvement: 60%
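For anyone unfamiliar with the metric: the "99th percentile" figures above come from frame-time data rather than plain average FPS, so they capture stutter that averages hide. A toy Python sketch of how such a number is typically derived (made-up frame times, not Intel's actual methodology):

```python
import numpy as np

# Hypothetical per-frame render times in milliseconds (made-up data).
frame_times_ms = np.array([12.1, 13.0, 12.4, 35.0, 12.8, 12.5, 14.2, 12.9, 40.1, 12.6])

# Average FPS = total frames / total time.
avg_fps = 1000.0 / frame_times_ms.mean()

# "99th percentile FPS" is usually the FPS corresponding to the 99th percentile
# frame time, i.e. the threshold that 99% of frames render faster than.
p99_frame_time_ms = np.percentile(frame_times_ms, 99)
p99_fps = 1000.0 / p99_frame_time_ms

print(f"average FPS:         {avg_fps:.1f}")
print(f"99th percentile FPS: {p99_fps:.1f}")
```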

Intel is also now boldly asking people to retest the Arc A750 GPU, stating that the card now offers up to 52% better performance per dollar than the RTX 3060, which has an average selling price of roughly $391 as of January 26, 2023.
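Napkin math on how a performance-per-dollar claim like that shakes out (the relative-performance number below is a placeholder chosen to show the mechanics, not a measured result):

```python
# Prices from the post: A750 cut to $250, RTX 3060 average selling price ~$391.
price_a750 = 250.0
price_rtx3060 = 391.0

# Placeholder assumption: the A750 averages ~97% of the RTX 3060's FPS
# across the titles being compared (the real ratio varies per game).
relative_performance = 0.97

perf_per_dollar_gain = relative_performance * (price_rtx3060 / price_a750) - 1.0
print(f"A750 perf-per-dollar advantage: {perf_per_dollar_gain:.0%}")  # ~52% with these inputs
```

With inputs like these, most of the claimed advantage comes from the price gap rather than raw performance.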




Of course, this is only what Intel claims, but still, it's impressive to see how much effort Intel has put into improving their drivers.
We now actually have a serious contender in the GPU space.

Also, Intel cut the price of the A750 to $250.
 
Noice! Rather big improvements were already being reported for individual titles weeks after launch, but I guess now they've brought a broader catalogue into the ring.
I was cautiously optimistic about Arc products since launch, and they were hardly for everyone (the A380 specs certainly could not meet my requirements). Maybe they hoped to get there sooner, or maybe it was a trial balloon with expected losses anyway, but Battlemage should launch on a much better foundation.
Nvidia features for AMD prices, and then at least AMD-level driver performance on top...?
 

GreatnessRD

Member
Good to hear the drivers are maturing. Gives me hope they will be serious with Battlemage.

Unless it's cancelled like Moore's Law is Dead said!
 

winjer

Gold Member
Good to hear the drivers are maturing. Gives me hope they will be serious with Battlemage.

Unless it's cancelled like Moore's Law is Dead said!

MLID is a dumbass. Don't trust anything he says.
Battlemage is in development as we speak.

 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Glad they are still working on the drivers and they really have improved their legacy support.
They have slowed down a bunch in month 3, but looking at how fast they fixed their issues.....I told people not to play with Intel.





That's actually really impressive, these might actually be viable soon
If the RTX 3060 is a viable card, the A770 is a way, way more viable card, and always was if you were someone who played DX12 or Vulkan titles.
The main issue with the launch was that Intel didn't focus on legacy APIs, and the desktop app/drivers were kinda borked at launch.

As it is right now, and honestly since about a month after launch, their drivers have been stable and the desktop app stopped bricking SSDs/HDDs.
But the fact it used to brick SSDs/HDDs is a bad, bad first impression.

If Intel does a refresh of the Alchemist cards or skips straight to Battlemage, they are in a really, really good spot.

the two people who use these gpus will be happy
AMD's market share in the GPU space dropped something like ~8%, while Intel gained ~10% market share.
Most of the share Intel gained came from AMD, because Nvidia only lost ~2%.
I'm gonna go out on a limb and say that if the A-series refresh or the B series drops before AMD has proper mid/low-range cards,
Intel will be in second place in the GPU space.
 

Pagusas

Elden Member
By 2 generations they will be superior to AMD.

And even then you'll still have certain users coming in here telling us how AMD GPUs are the best.

The truth is Intel won't be stealing many users from Nvidia; it's going to eat AMD's lunch, and we'll be back to a 2-GPU market lol. God I hope Intel can get it completely together and launch a flagship card in a few gens that punches Nvidia in the face. Imagine how good these cards could be if Nvidia had real competition.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
And even then you'll still have certain users coming in here telling us how AMD GPUs are the best.

The truth is Intel won't be stealing many users from Nvidia; it's going to eat AMD's lunch, and we'll be back to a 2-GPU market lol. God I hope Intel can get it completely together and launch a flagship card in a few gens that punches Nvidia in the face. Imagine how good these cards could be if Nvidia had real competition.
Nobody really buys range-topper cards.

So Intel is actually being smart by releasing mid-range cards.
If they can hold the xx70/xx60 crown, they are solid for life.
Halo products don't really matter because so few of them sell.

If, say, their C770 is ~10fps behind the RTX 6080, with better ray tracing and comparable or better upscaling tech in XeSS,
they've pretty much won that generation.

The RTX 6090 might be the best card in the world for raw horsepower and cost 2000 dollars.
But for that money you could build a whole Intel PC that manages to max out your 4K120 panel anyway.
What's the extra horsepower for?

Intel will have market share and the ability not just to moneyhat XeSS into games, but to have devs actually want to use XeSS.
If their iGPUs keep improving too... Nvidia and AMD have a fight on their hands, because those not-quite-gaming laptops that can still game are gonna have Intel Arc inside.

The speed at which they "fixed" their driver issues really is a testament to Intel's abilities.
 

64bitmodels

Reverse groomer.
If, say, their C770 is ~10fps behind the RTX 6080, with better ray tracing and comparable or better upscaling tech in XeSS,
they've pretty much won that generation.
No way in hell Intel will outdo Nvidia in ray tracing performance. It's a pipe dream.
Nvidia were the first to the tech and they know it better than anyone else.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
No way in hell Intel will outdo Nvidia in ray tracing performance. It's a pipe dream.
Nvidia were the first to the tech and they know it better than anyone else.
Intel's ray tracing solution is already better than Nvidia's.
Intel's chips are just "currently" physically smaller; die for die, Intel Arc will outperform an equivalent RTX die at ray tracing, especially if devs take Intel's advice on how to further accelerate real-time ray tracing.

Read up on the Intel Arc RTU and TSU; they outdo the RT Cores in Nvidia hardware.
[Intel Arc slides: ray tracing hardware, thread sorting, asynchronous ray tracing]
 

winjer

Gold Member
Intel's ray tracing solution is already better than Nvidia's.
Intel's chips are just "currently" physically smaller; die for die, Intel Arc will outperform an equivalent RTX die at ray tracing, especially if devs take Intel's advice on how to further accelerate real-time ray tracing.

Read up on the Intel Arc RTU and TSU; they outdo the RT Cores in Nvidia hardware.

From what I understand, Nvidia's SER can do something very similar to Intel's Thread Sorting.

Advanced ray tracing requires computing the impact of many rays striking numerous different material types throughout a scene, creating a sequence of divergent, inefficient workloads for shaders (shaders calculate the appropriate levels of light, darkness, and color during the rendering of a 3D scene, and are used in every modern game).
Shader Execution Reordering (SER) technology dynamically reorganizes these previously inefficient workloads into considerably more efficient ones. SER can improve shader performance for ray tracing operations by up to 3X, and in-game frame rates by up to 25%.
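Conceptually, Intel's thread sorting and Nvidia's SER attack the same problem: neighbouring rays that hit different materials want to run different shader code, which wrecks SIMD efficiency. A toy CPU-side Python sketch of the principle (nothing like the real hardware schedulers, just an illustration of why regrouping helps):

```python
import random

random.seed(0)
WAVE = 32  # threads per wave/warp (typical SIMD width)

# Each ray hit is tagged with the shader/material it needs (hypothetical IDs).
hits = [random.choice(["opaque", "glass", "foliage", "metal"]) for _ in range(1024)]

def waves_cost(hit_list):
    """Crude divergence model: each wave pays once per distinct material it contains."""
    return sum(len(set(hit_list[i:i + WAVE])) for i in range(0, len(hit_list), WAVE))

print("unsorted cost:", waves_cost(hits))          # most waves mix all four materials
print("sorted cost:  ", waves_cost(sorted(hits)))  # regrouped by material, far less divergence
```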

But NVidia's RT core has more advanced features, especially to deal with alpha textures, and much faster BVH processing.

The new RT Cores also include a new Opacity Micromap (OMM) Engine and a new Displaced Micro-Mesh (DMM) Engine. The OMM Engine enables much faster ray tracing of alpha-tested textures often used for foliage, particles, and fences. The DMM Engine delivers up to 10X faster Bounding Volume Hierarchy (BVH) build time with up to 20X less BVH storage space, enabling real-time ray tracing of geometrically complex scenes.
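For context on those BVH numbers: a bounding volume hierarchy is just a tree of boxes over the scene's geometry that lets a ray skip most of it. A minimal median-split builder in Python, purely to illustrate what "building a BVH" means (real drivers build far more sophisticated structures on the GPU):

```python
import random
from dataclasses import dataclass

@dataclass
class Node:
    bbox: tuple           # ((min_x, min_y, min_z), (max_x, max_y, max_z))
    left: "Node" = None
    right: "Node" = None
    prims: list = None    # set on leaves: indices of the primitives inside

def bounds(centroids, idx):
    xs, ys, zs = zip(*(centroids[i] for i in idx))
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def build(centroids, idx, leaf_size=4):
    bbox = bounds(centroids, idx)
    if len(idx) <= leaf_size:
        return Node(bbox, prims=list(idx))
    # Split along the widest axis at the median centroid.
    axis = max(range(3), key=lambda a: bbox[1][a] - bbox[0][a])
    idx = sorted(idx, key=lambda i: centroids[i][axis])
    mid = len(idx) // 2
    return Node(bbox, left=build(centroids, idx[:mid], leaf_size),
                      right=build(centroids, idx[mid:], leaf_size))

# Hypothetical primitive centroids standing in for a triangle mesh.
random.seed(1)
centroids = [(random.random(), random.random(), random.random()) for _ in range(64)]
root = build(centroids, list(range(len(centroids))))
print("root bounds:", root.bbox)
```

The quoted "BVH build time" and "BVH storage" figures are about doing exactly this kind of work, at scene scale, every frame for dynamic geometry.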
 

Crayon

Member
I'm on AMD for the open-source Linux drivers. Imma take a look at how Arc is doing on Linux for my next upgrade, and I will try to give Intel the benefit of the doubt. The hardware and the prices seem to be good, and hey, someone's got to buy it if we want them to keep trying.
 

GreatnessRD

Member
MLID is a dumbass. Don't trust anything he says.
Battlemage is in development as we speak.

My sarcasm didn't come through in text. That's what the exclamation point at the end was for, lol. MLID is nothing but satire. I still watch him for the entertainment sometimes.
 

Buggy Loop

Member


Crazy ramp-up in drivers, and so fast. With the $50 price drop, it's starting to be a good recommendation for mid-range. Intel's 2nd gen could honestly scoop up the mid-range if they keep the same philosophy and flex their silicon and drivers better. While AMD and Nvidia are fighting over halo products, the mid-range is wide open for a low-price alternative.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
From what I understand, nvidia's SER can do something very similar to Intel's Thread Sorting.



But NVidia's RT core has more advanced features, especially to deal with alpha textures, and much faster BVH processing.
Unfortunately, the RTX pipeline as a whole is actually what's holding Nvidia back, and I say "holding back" very, very loosely, because it's still some crazy shit to have "performant" real-time ray tracing. It's weird to even say it considering the RTX 40 series just came out and those are Gen 3 RT Cores, but basically legacy stuff is keeping them from making a big jump forward in the pipeline, so they effectively brute-force their way forward.

Intel got to start with a clean slate, having seen what Nvidia and AMD were doing, and drawing on their own knowledge from the offline ray tracing space.
They poached RTX engineers to work on Arc, and their pipeline is well ahead of AMD's and, by many people's metrics, actually beats Nvidia's.
Die for die, the A770 suddenly starts punching well above its weight class whenever RT is enabled.
And this is basically with games and APIs that aren't "optimized" for this better pipeline.

There's a Blender developer who even commented on this and basically said that once the Intel API that actually takes advantage of the hardware is released, they are gonna be the most efficient GPU renderers.

People seem to forget, or just don't know, that Intel has actually been in the ray tracing game for a minute, and their tech is already award-winning.
They didn't just jump in now willy-nilly; they were already a heavily respected team in the offline render space, they just came with their own hardware for the real-time render space.
Intel Embree ray tracing, Intel Open Image Denoise, Intel OSPRay and the Intel Open Path Guiding Library are already things that many people who are into graphics tech or use offline renderers are super impressed by. Their own hardware was the logical next step... and they haven't disappointed yet.


 

winjer

Gold Member
Unfortunately, the RTX pipeline as a whole is actually what's holding Nvidia back, and I say "holding back" very, very loosely, because it's still some crazy shit to have "performant" real-time ray tracing. It's weird to even say it considering the RTX 40 series just came out and those are Gen 3 RT Cores, but basically legacy stuff is keeping them from making a big jump forward in the pipeline, so they effectively brute-force their way forward.

Intel got to start with a clean slate, having seen what Nvidia and AMD were doing, and drawing on their own knowledge from the offline ray tracing space.
They poached RTX engineers to work on Arc, and their pipeline is well ahead of AMD's and, by many people's metrics, actually beats Nvidia's.
Die for die, the A770 suddenly starts punching well above its weight class whenever RT is enabled.
And this is basically with games and APIs that aren't "optimized" for this better pipeline.

There's a Blender developer who even commented on this and basically said that once the Intel API that actually takes advantage of the hardware is released, they are gonna be the most efficient GPU renderers.

People seem to forget, or just don't know, that Intel has actually been in the ray tracing game for a minute, and their tech is already award-winning.
They didn't just jump in now willy-nilly; they were already a heavily respected team in the offline render space, they just came with their own hardware for the real-time render space.
Intel Embree ray tracing, Intel Open Image Denoise, Intel OSPRay and the Intel Open Path Guiding Library are already things that many people who are into graphics tech or use offline renderers are super impressed by. Their own hardware was the logical next step... and they haven't disappointed yet.

Not really. DXR 1.0 was made mostly on the backbone of NVidia's RT hardware. So it is well optimized to take advantage of NVidia RT cores.
DXR 1.1 was made to better suit AMD's RT implementation, both in consoles and PC.
Intel was left in a situation where no version of DXR is optimal for them, especially DXR 1.1
 

The_Mike

I cry about SonyGaf from my chair in Redmond, WA
I hope Intel will push more for good performance and lower prices so nvidia finally gets some competition.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Not really. DXR 1.0 was made mostly on the backbone of NVidia's RT hardware. So it is well optimized to take advantage of NVidia RT cores.
DXR 1.1 was made to better suit AMD's RT implementation, both in consoles and PC.
Intel was left in a situation where no version of DXR is optimal for them, especially DXR 1.1
What?
 

Raphael

Member
Would a 13500 be a good CPU to pair with the A750? Mostly for strategy games at 1080p ultrawide. Want something that will play the next couple of Total War games well on high.
 

RoboFu

One of the green rats
This is just them fixing their crappy DX9 emulation. Which is good, I guess.

But really, Intel has the best opportunity to hit the low-end GPU market right now. Nvidia and AMD don't seem too eager to throw out low-priced, low-end cards at the moment. Intel needs to throw out a sale and bundle some new games ASAP.
 

Buggy Loop

Member

Not sure where he's going with that...

People act like Nvidia went into the DXR consortium and taught people what ray tracing is... it's math, very well-known math. When AMD/Nvidia/Microsoft and probably others are in the consortium, they all know what they are specifying, and on top of that, it's for a hardware-agnostic API.

The only thing that got introduced later was inline ray tracing, which is a very niche solution for when devs don't have many dynamic shaders. The goal is to limit shader register size and avoid divergent shader models. It's very specific. Devs have to analyze whether their game will even benefit from inline ray tracing; it might not. And Nvidia benefits from it too when it works; it's in their best-practices document. The part that's "for AMD" is that the pipeline is not choking.

What's saving Nvidia's RT now is the ASIC nature of their RT core, probably still unmatched in speed for ridiculously heavy RT like path tracing, and probably overkill for insignificant RT effects. Cyberpunk 2077 Overdrive will probably showcase how their 3rd-gen core is tailor-made for that. They'll eventually have to drop that ASIC, or maybe not; who the hell knows what rabbit they'll pull out.

But yes, Intel did their homework. Did they patch it to work in Portal RTX yet?
 

winjer

Gold Member

The specs for an API like DirectX are always done in a consortium between MS and the GPU makers. GPU makers will propose certain specs to be adopted in the DX API.
NVidia was the first to market with a GPU with RT capabilities. Not only that but also with support for Mesh Shaders, Variable Rate Shading and Sampler Feedback.
Mesh Shaders, Variable Rate Shading and Sampler Feedback, became the basis for DX12_2, as they were the most advanced and complete.
And because NVidia was the only one with RT units on a GPU, the specs they proposed were the ones accepted. So DXR 1.0 was made around NVidia hardware.
Eventually, AMD released RDNA2 with a different implementation of RT. Especially with regards to the BVH.
And this was the GPU for consoles, including the Series S/X. So MS created a more adequate DXR version for AMD GPUs: DXR 1.1

This is from the MS presentation about DXR 1.1

USING DXR 1.1 TO TRACE RAYS
  • DXR 1.1 lets you call TraceRay() from any shader stage
  • Best performance is found when you use it in compute shaders, dispatched on a compute queue
  • Matches existing asynchronous compute techniques you’re already familiar with
  • Always have just 1 active RayQuery object in scope at any time

The optimized path for NVidia is the DXR 1.0 approach, with rays set up at the first stage and run through the regular graphics pipeline.

With DXR 1.1's inline functionality, rays can be fired at any stage of the rendering pipeline, and it works best with a compute pipeline.
With AMD's Ray Accelerators sitting in the TMUs, you never want to trace rays while the texture mapping units are busy with texture sampling or blending. Doing so will lower performance.
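Very loosely, the practical difference is who drives the traversal: DXR 1.0 dispatches rays through a dedicated raygen/hit/miss pipeline, while a DXR 1.1 RayQuery is a loop the shader itself steps from whatever stage it's in. A toy CPU-side Python sketch of that "inline" loop, reusing the toy Node/BVH from the earlier sketch (conceptual only, with Python standing in for HLSL; nothing here reflects real driver behaviour):

```python
def ray_hits_box(origin, direction, box):
    """Slab test: does the ray (origin + t*direction, t >= 0) hit the axis-aligned box?"""
    tmin, tmax = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, *box):  # box = (mins, maxs)
        if abs(d) < 1e-12:
            if o < lo or o > hi:
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
    return tmin <= tmax

def inline_trace(origin, direction, bvh_root):
    """The caller owns the traversal loop, the way a RayQuery is stepped with Proceed()."""
    stack, hit = [bvh_root], None
    while stack:
        node = stack.pop()
        if not ray_hits_box(origin, direction, node.bbox):
            continue
        if node.prims is not None:          # leaf: a real query would intersect triangles here
            hit = node.prims[0] if hit is None else hit
        else:
            stack += [node.left, node.right]
    return hit

# e.g., with the toy BVH built in the earlier sketch:
# print(inline_trace((0.0, 0.0, -1.0), (0.01, 0.01, 1.0), root))
```

The point being made above is about where that loop is allowed to run and what hardware it contends with, not about the traversal itself.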
 

Buggy Loop

Member
The specs for an API like DirectX are always done in a consortium between MS and the GPU makers. GPU makers will propose certain specs to be adopted in the DX API.
NVidia was the first to market with a GPU with RT capabilities. Not only that but also with support for Mesh Shaders, Variable Rate Shading and Sampler Feedback.
Mesh Shaders, Variable Rate Shading and Sampler Feedback, became the basis for DX12_2, as they were the most advanced and complete.
And because NVidia was the only one with RT units on a GPU, the specs they proposed were the ones accepted. So DXR 1.0 was made around NVidia hardware.
Eventually, AMD released RDNA2 with a different implementation of RT. Especially with regards to the BVH.
And this was the GPU for consoles, including the Series S/X. So MS created a more adequate DXR version for AMD GPUs: DXR 1.1

This is from the MS presentation about DXR 1.1



The optimized path for NVidia is the DXR 1.0 approach, with rays set up at the first stage and run through the regular graphics pipeline.

With DXR 1.1's inline functionality, rays can be fired at any stage of the rendering pipeline, and it works best with a compute pipeline.
With AMD's Ray Accelerators sitting in the TMUs, you never want to trace rays while the texture mapping units are busy with texture sampling or blending. Doing so will lower performance.

Turing, 2018 hardware, was ready day one for DXR 1.1 features 🤷‍♂️

It's what Nvidia recommends in their best practices, but as Microsoft states in their detailed explanation of inline ray tracing, you can have so many dynamic shaders that it won't be worth it. Nvidia warns of this too.

Minecraft RTX got a DXR 1.1 patch a while ago, September 2020 I believe. It boosted performance on Turing/Ampere. Surely someone has dug into RDNA 2 Minecraft DXR 1.0 vs 1.1?

But in the end, path tracing... DXR 1.1 or 1.0, it simply doesn't bode well for AMD.

At this point in time, until AMD makes their own path tracing demo, it does seem like Nvidia is simply further ahead the heavier the effects get.

All this for their hybrid pipeline that saves silicon area for more rasterization... if it actually showed a substantial advantage in rasterization, it would be a legit good engineering decision... alas.
 

winjer

Gold Member
Turing, 2018 hardware, was ready day one for DXR 1.1 features 🤷‍♂️

It's what Nvidia recommends in their best practices, but as Microsoft states in their detailed explanation of inline ray tracing, you can have so many dynamic shaders that it won't be worth it. Nvidia warns of this too.

Minecraft RTX got a DXR 1.1 patch a while ago, September 2020 I believe. It boosted performance on Turing/Ampere. Surely someone has dug into RDNA 2 Minecraft DXR 1.0 vs 1.1?

But in the end, path tracing... DXR 1.1 or 1.0, it simply doesn't bode well for AMD.

At this point in time, until AMD makes their own path tracing demo, it does seem like Nvidia is simply further ahead the heavier the effects get.

All this for their hybrid pipeline that saves silicon area for more rasterization... if it actually showed a substantial advantage in rasterization, it would be a legit good engineering decision... alas.


Wait a minute, did you think I was saying that NVidia hardware would lose performance with DXR 1.1?
Sorry if I wasn't clear, but that was not what I was saying.
With or without inline ray-tracing, NVidia RT will always work well. Sometimes it might even work better, if it can cast Rays at any point in the execution pipeline.

The issue is with AMD. Because RDNA2 uses a Ray Accelerator in the TMU, this means that there will be a performance hit, anytime a game tries to calculate a Ray, when the TMU is in use to operate a texture.
So DXR1.1 is a nice thing to have for NVidia. But it's essential for AMD to maintain performance.

You also have to consider that DXR 1.1 just adds a handful of extra instructions, on top of DXR 1.0. It does not replace it.
So even when a game is using DXR 1.1, it's still using most, if not all of the NVidia specs for DXR 1.0
 

Miyazaki’s Slave

Gold Member
Ok… I picked up a 770 and a 750 at Micro Center on the cheap to mess around with them for a bit. HOLY CRAP, I cannot believe how good these cards are.

9th-gen Core i9-9900K, 32GB DDR4, Gen3 NVMe, and the 770 runs HW:L at 4K 60fps with Intel XeSS (or whatever it is called) in High Quality mode (just running around inside the castle).

Very surprised by how good both are. At 1440p the 750 is a damn beast.
 

Buggy Loop

Member
Ok… I picked up a 770 and a 750 at Micro Center on the cheap to mess around with them for a bit. HOLY CRAP, I cannot believe how good these cards are.

9th-gen Core i9-9900K, 32GB DDR4, Gen3 NVMe, and the 770 runs HW:L at 4K 60fps with Intel XeSS (or whatever it is called) in High Quality mode (just running around inside the castle).

Very surprised by how good both are. At 1440p the 750 is a damn beast.

Congrats

These cards are becoming interesting value propositions. Intel is gaining driver performance way faster, and by way more, than I would have thought possible. Makes me excited for their gen 2.
 

winjer

Gold Member

INTEL Arc & Iris Graphics 31.0.101.4311​


Game performance improvements versus Intel® 31.0.101.4257 software driver for:​

Dead Space Remake (DX12)

  • Up to 55% uplift at 1080p with Ultra settings on Arc A750
  • Up to 63% uplift at 1440p with High settings on Arc A750
F1 22 (DX12)

  • Up to 6% uplift at 1440p with High settings on Arc A770
  • Up to 7% uplift at 1440p with High settings on Arc A750
  • Up to 17% uplift at 1080p with Ultra High Ray Tracing settings on Arc A750
Dying Light 2 Stay Human (DX12)

  • Up to 6% uplift at 1080p with High Ray Tracing settings preset on Arc A770
  • Up to 7% uplift at 1440p with High Ray Tracing settings preset on Arc A770
Dirt 5 (DX12)

  • Up to 8% uplift at 1080p with Ultra High Ray Tracing settings on Arc A750
  • Up to 4% uplift at 1440p with Ultra High Ray Tracing settings on Arc A750
Deathloop (DX12)

  • Up to 4% uplift at 1080p with Very High and Ray Tracing Performance settings on Arc A750
  • Up to 6% uplift at 1440p with Very High and Ray Tracing Performance settings on Arc A750

Fixed Issues​

Intel® Arc™ Graphics Products:

  • Microsoft Flight Simulator (DX11) may experience application crash during gameplay.
  • Sea of Thieves (DX11) may exhibit color corruption on water edges.
  • Bright Memory Infinite Ray Tracing Benchmark (DX12) may experience lower than expected performance.
  • Blackmagic DaVinci Resolve may exhibit color corruption with Optical Flow.

Known Issues​

Intel® Arc™ Graphics Products:

  • System may hang while waking up from sleep. May need to power cycle the system for recovery.
  • GPU hardware acceleration may not be available for media playback and encode with some versions of Adobe Premiere Pro.
  • Topaz Video AI* may experience errors when using some models for video enhancement.
Intel® Iris™ Xe MAX Graphics Products:

  • Driver installation may not complete successfully on certain notebook systems with both Intel® Iris™ Xe + Iris™ Xe MAX devices. A system reboot and re-installation of the graphics driver may be required for successful installation.
Intel® Core™ Processor Products:

  • Total War: Warhammer III (DX11) may experience an application crash when loading battle scenarios.
  • Call of Duty Warzone 2.0 (DX12) may exhibit corruption on certain light sources such as fire.
  • Conqueror’s Blade (DX12) may experience an application crash during game launch.
  • A Plague Tale: Requiem (DX12) may experience application instability during gameplay.
  • Battlefield 2042 (DX12) may exhibit color corruption at the game menu.
  • Crime Boss (DX12) may experience texture flickering when XeSS is enabled.
  • Call of Duty: Modern Warfare 2 may experience color corruption in the QuickPlay lobby.
 

Three

Member
Intel's ray tracing solution is already better than Nvidia's.
Intel's chips are just "currently" physically smaller; die for die, Intel Arc will outperform an equivalent RTX die at ray tracing, especially if devs take Intel's advice on how to further accelerate real-time ray tracing.

Read up on the Intel Arc RTU and TSU; they outdo the RT Cores in Nvidia hardware.
[Intel Arc slides: ray tracing hardware, thread sorting, asynchronous ray tracing]
How does this scale for a bigger chip though?
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
How does this scale for a bigger chip though?
That's exactly why Intel is gonna be a player to watch with their second generation of Arc GPUs.
Their first gen was pretty much only let down at launch by drivers; the hardware and the actual underlying tech are solid.

If they give us an RTX 3080/RTX 4070-class card with much better RT, that's a legit 1440p card worth looking at in the coming generation.
With more devs implementing XeSS, and Intel working super hard on their drivers and software, Intel isn't the laughing stock everyone thought they were going to be.

They are unlikely to make an RTX 4080-level card, but if they aim at that sub-600-dollar, RTX 4070-like tier, Intel should slowly but surely gain market share.
 

LordOfChaos

Member
Good improvements. I hope they don't give up on dedicated GPUs. The first generation not lighting the world on fire doesn't matter. Keep trying and iterating for several more generations and they may well have a viable third offering in the market.

Hell, their first-try RT performance already bested AMD's.
 