
Intel Arc Announced. High Performance Graphics. DX12 Ultimate. Hardware Ray Tracing. AI Super Sampling. Launches Q1 2022.

ZywyPL

Banned
Ssex


Great, now nobody will know if by XSS you're talking about Series S or Intel's upscaling tech... Anyway, I wonder if it'll have to be individually implemented into titles just like DLSS and FSR, or be available for general use at the driver level. IMO, whoever gets that done first will dominate the market.
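For anyone wondering why that distinction matters: a per-title integration gets engine data (motion vectors, depth) that a driver-level solution never sees. A rough toy sketch in Python, with every name in it made up purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    color: str           # finished colour buffer (the only thing a driver-level hook can see)
    motion_vectors: str  # per-pixel motion data (engine-internal)
    depth: str           # depth buffer (engine-internal)

def per_title_upscale(frame: Frame) -> str:
    # DLSS/XeSS-style engine integration: the upscaler is handed motion vectors
    # and depth, so it can accumulate detail across frames (temporal reconstruction).
    return f"temporal_upscale({frame.color}, {frame.motion_vectors}, {frame.depth})"

def driver_level_upscale(final_image: str) -> str:
    # Driver-level injection: only the finished image is available,
    # so quality is limited to spatial scaling/sharpening.
    return f"spatial_upscale({final_image})"

frame = Frame(color="color_buf", motion_vectors="mv_buf", depth="depth_buf")
print(per_title_upscale(frame))
print(driver_level_upscale(frame.color))
```

That's why the driver-level route is easier to roll out everywhere but tends to look worse, while the per-title route looks better but needs developer buy-in.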
 
Last edited:

John Wick

Member
You actually have to be capable enough to compete in order to deliver competition in any context. For Intel, that remains to be seen.

This is a common misconception I see on the internet.

Competition =/= just showing up. You actually have to be competitive with your offering.
What are you blabbering on about? Are you saying Intel aren't capable enough?
They are a massive company with plenty of resources and cash. They will add competition to the market. I expect them to do well and get a foot in the door. Then with the next release they will improve. Just them entering the market will put pressure on Nvidia and AMD to do better.
 
What are you blabbering on about? Are you saying Intel aren't capable enough?
They are a massive company with plenty of resources and cash. They will add competition to the market. I expect them to do well and get a foot in the door. Then with the next release they will improve. Just them entering the market will put pressure on Nvidia and AMD to do better.

Name a single Intel-produced discrete desktop GPU with performance comparable to AMD's and NVidia's best cards.

What?... You can't?... Well, that's because they're new to this domain, and no amount of simply throwing money at the problem will let them leapfrog the literal decades of high-end GPU design experience, expertise and patented know-how of NVidia and AMD to magically produce a superior product on the first try.

Heck, it's taken AMD how long to even catch up to the gaming performance and performance per watt of NVidia's GPUs? What makes you think Intel will be able to be competitive on their first attempt? You need to temper your expectations.

We're not talking about a pre-school baseball league. We're talking about the discrete desktop GPU market, one where the performance differences between competing products are so small that individual differences get magnified beyond all reasonable recognition. Even if Intel's first desktop GPUs fall behind AMD/NVidia's worst products by 20% in performance per watt, they won't be considered competitive, precisely because of how tight the performance margins of this market are.
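To put a number on that 20%: assume two cards delivering the same framerate, one with a 20% perf-per-watt deficit (the wattage figures below are made up, just to show the scale of the gap):

```python
# Hypothetical example: two cards delivering the same framerate.
target_fps = 100
incumbent_power_w = 220        # assumed board power of an established card
perf_per_watt_deficit = 0.20   # the 20% gap discussed above

incumbent_ppw = target_fps / incumbent_power_w
newcomer_ppw = incumbent_ppw * (1 - perf_per_watt_deficit)
newcomer_power_w = target_fps / newcomer_ppw

print(f"Incumbent: {incumbent_power_w} W for {target_fps} fps")
print(f"Newcomer:  {newcomer_power_w:.0f} W for the same {target_fps} fps")
# -> a 20% perf/watt deficit means roughly 275 W vs 220 W for identical performance.
```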
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Name a single Intel-produced discrete desktop GPU with performance comparable to AMD's and NVidia's best cards.

What?... You can't?... Well, that's because they're new to this domain, and no amount of simply throwing money at the problem will let them leapfrog the literal decades of high-end GPU design experience, expertise and patented know-how of NVidia and AMD to magically produce a superior product on the first try.

Heck, it's taken AMD how long to even catch up to the gaming performance and performance per watt of NVidia's GPUs? What makes you think Intel will be able to be competitive on their first attempt? You need to temper your expectations.

We're not talking about a pre-school baseball league. We're talking about the discrete desktop GPU market, one where the performance differences between competing products are so small that individual differences get magnified beyond all reasonable recognition. Even if Intel's first desktop GPUs fall behind AMD/NVidia's worst products by 20% in performance per watt, they won't be considered competitive, precisely because of how tight the performance margins of this market are.

Intel poached engineers.
Intel's R&D costs dwarf AMD's and Nvidia's.
Intel are being realistic; their first few products are expected to be at 3060~3070+ levels of performance... you know, the most popular segment of the market.

I'm going to hedge a bet that Alchemist will be in line with, if not better than, AMD on perf per watt; they're using TSMC's N6 node, a revision of the N7 node used by the RX 6000 series.

 
Intel poached engineers.

Every company does this.

Intel's R&D costs dwarf AMD's and Nvidia's.

Yes because they own and operate their own fabs... apples to oranges.

And yet despite this, they were stuck on 10nm for how long? Meanwhile, TSMC has been eating their breakfast, lunch and dinner.

Intel are being realistic; their first few products are expected to be at 3060~3070+ levels of performance... you know, the most popular segment of the market.

I'm going to hedge a bet that Alchemist will be in line with, if not better than, AMD on perf per watt; they're using TSMC's N6 node, a revision of the N7 node used by the RX 6000 series.


You're an optimist... we'll see.
 

FStubbs

Member
Name a single Intel-produced discrete desktop GPU with performance comparable to AMD's and NVidia's best cards.

What?... You can't?... Well, that's because they're new to this domain, and no amount of simply throwing money at the problem will let them leapfrog the literal decades of high-end GPU design experience, expertise and patented know-how of NVidia and AMD to magically produce a superior product on the first try.

Heck, it's taken AMD how long to even catch up to the gaming performance and performance per watt of NVidia's GPUs? What makes you think Intel will be able to be competitive on their first attempt? You need to temper your expectations.

We're not talking about a pre-school baseball league. We're talking about the discrete desktop GPU market, one where the performance differences between competing products are so small that individual differences get magnified beyond all reasonable recognition. Even if Intel's first desktop GPUs fall behind AMD/NVidia's worst products by 20% in performance per watt, they won't be considered competitive, precisely because of how tight the performance margins of this market are.
Intel has a LOT of money to throw at it, though. I figure they'll catch up in 4-5 years. The problem is, it's Intel. It'll be FAR more expensive than AMD or nVIDIA.

Apple and Huawei did the same thing with ARM; Huawei has mostly caught up, and Apple is way ahead of everyone.
 

Solarstrike

Gold Member
This is excellent. The more competition, the better. If the price is reasonable and much lower than the competition's, Intel should hasten its way back to the top of PC gaming. They should purchase SEGA to take a step into the gaming market. They hold many popular PC IPs.
 
Intel has a LOT of money to throw at it, though. I figure they'll catch up in 4-5 years. The problem is, it's Intel. It'll be FAR more expensive than AMD or nVIDIA.

Apple and Huawei did the same thing with ARM; Huawei has mostly caught up, and Apple is way ahead of everyone.

I don't disagree.

I agree with you completely.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Every company does this.
Yes because they own and operate their own fabs... apples to oranges.
And yet despite this, they were stuck on 10nm for how long? Meanwhile, TSMC has been eating their breakfast, lunch and dinner.
You're an optimist... we'll see.

This chip is being built by TSMC on the node after the RX 6000 node, but you think it will be less efficient?
 

Xyphie

Member
Likely Arc will be most popular in laptops/OEM builds initially. I think Intel will push CPU+GPU+chipset+WiFi+etc. bundling very aggressively to ASUS, MSI, Dell etc., as they are the only company that can offer the full stack there, and they have tons of inertia. People really overestimate the size of the DIY market; the reason the GTX 1060 is so popular is because of laptops.
 

Excess

Member
Well, that's because they're new to this domain, and no amount of simply throwing money at the problem will let them leapfrog the literal decades of high-end GPU design experience, expertise and patented know-how of NVidia and AMD to magically produce a superior product on the first try.
>be Nvidia Engineer
>Intel poaches said engineer with job offer

This is how industries work. The difference is, despite your claim, how much money they've appropriated to the overall business strategy. I can guarantee you that AMD's GPU department budget is a tiny fraction of Nvidia's.
 

FireFly

Member
Name a single Intel produced discrete desktop GPU with comparable performance to AMD and NVidia's best cards?

What?... You can't?... Well that's because they're new to this domain and no amount of simply throwing money at the problem will allow them the ability to leapfrog the literal decades of high-end GPU design experience, expertise and patented know-how of NVidia and AMD, to magically produce a superior product on the first try.

Heck, it's taken AMD how long to even catch up to the gaming performance and performance per watt of NVidia's GPUs. What makes you think Intel will be able to be competitive on their first attempt. You need to temper your expectations.

We're not talking about a pre-school baseball league. We're talking about the discrete, desktop GPU market, which is one where the performance differences between competing products are so small that individual differences get magnified beyond all reasonable recognition. Even if Intel's first desktop GPUs fall behind AMD/NVidia's worst products by 20% in performance per watt, they won't be considered to be competitive just because of how tight the performance margins of this market are.
Intel is already competitive with AMD from a performance per watt perspective in the iGPU space, albeit with the caveat that they are up against an optimised Vega design. They're promising a further 50% performance per watt boost with Arc, which could put them close to RDNA 2 (depending on how the Vega iGPUs compare to Navi 1).
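That positioning claim is basically a chain of ratios. A hedged sketch with placeholder index values (the absolute numbers are assumptions; only the ratios matter):

```python
# Relative perf/watt indices, normalised to RDNA 1 = 1.0.
# These are illustrative assumptions, not measurements.
rdna1 = 1.00
rdna2 = 1.50           # AMD's own ~50% gen-on-gen perf/watt claim for RDNA 2
tiger_lake_xe = 0.95   # assumption: Xe-LP roughly RDNA 1-class, per the Vega-vs-Navi-1 caveat above

arc_claimed = tiger_lake_xe * 1.50   # Intel's claimed +50% over Tiger Lake

print(f"Arc (claimed): {arc_claimed:.2f} vs RDNA 1 {rdna1:.2f} / RDNA 2 {rdna2:.2f}")
# -> ~1.43, i.e. between RDNA 1 and RDNA 2, if the starting assumption holds.
```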
 
This chip is being built by TSMC on the node after the RX 6000 node, but you think it will be less efficient?

Where did I say that?

You have a gift for arguing strawmen and logical fallacies.

Edit: in addition, GPU overall perf/watt isn't just a function of process technology. That's why Nvidia GPUs on bigger process nodes routinely bested AMD's efforts on more advanced processes. Microarchitectural design is arguably much more important for efficiency. Not that this was ever a point I was even arguing.
 
Last edited:
>be Nvidia Engineer
>Intel poaches said engineer with job offer

This is how industries work. The difference is, despite your claim,

What claim?

how much money they've appropriated to the overall business strategy. I can guarantee you that AMD's GPU department budget is a tiny fraction of Nvidia's.

And?!?

Intel is already competitive with AMD from a performance per watt perspective in the iGPU space, albeit with the caveat that they are up against an optimised Vega design. They're promising a further 50% performance per watt boost with Arc, which could put them close to RDNA 2 (depending on how the Vega iGPUs compare to Navi 1).

iGPU =/= dGPU

GPU microarchitectures aren't infinitely scalable, as clearly illustrated by the increasingly godawful performance/watt of Vega GPUs as they increased in size, fab process be damned.
 
Last edited:

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Where did I say that?

You have a gift for arguing strawmen and logical fallacies.

Edit: in addition, GPU overall perf/watt isn't just a function of process technology. That's why Nvidia GPUs on bigger process nodes routinely bested AMD's efforts on more advanced processes. Microarchitectural design is arguably much more important for efficiency. Not that this was ever a point I was even arguing.

Pretty much the whole post below is you talking about how Intel is unlikely to meet Nvidia/AMD's efficiency.

Note: Perf per watt is a measure of efficiency.

Heck, it's taken AMD how long to even catch up to the gaming performance and performance per watt of NVidia's GPUs? What makes you think Intel will be able to be competitive on their first attempt? You need to temper your expectations.

We're not talking about a pre-school baseball league. We're talking about the discrete desktop GPU market, one where the performance differences between competing products are so small that individual differences get magnified beyond all reasonable recognition. Even if Intel's first desktop GPUs fall behind AMD/NVidia's worst products by 20% in performance per watt, they won't be considered competitive, precisely because of how tight the performance margins of this market are.
 

Rikkori

Member
RDNA 3 (& Lovelace to a lesser extent) are going to make this look like integrated graphics, lmao. Y'all just don't understand how far behind Intel really is and what MCMs look like.
 

Elias

Member
RDNA 3 (& Lovelace to a lesser extent) are going to make this look like integrated graphics, lmao. Y'all just don't understand how far behind Intel really is and what MCMs look like.
People WANT GPUs in the low to mid range; that's what people are expecting from Intel on their first go. Also, the point of contention is that Intel already has a better feature set than AMD's cards.
 
Last edited:

FireFly

Member
GPU microarchitectures aren't infinitely scalable, as clearly illustrated by the increasingly godawful performance/watt of Vega GPUs as they increased in size, fab process be damned.
We don't need to speculate about scalability, since Intel specifically told us that Arc's performance per watt is 50% greater than Tiger Lake's.


That should put it somewhere between RDNA 1 and RDNA 2.

(Vega was only ever scaled down, not up)
 
We don't need to speculate about scalability, since Intel specifically told us that Arc's performance per watt is 50% greater than Tiger Lake's.


That should put it somewhere between RDNA 1 and RDNA 2.

(Vega was only ever scaled down, not up)

Vega was the final, scaled-up iteration of the GCN architecture.

Not seeing anything about the scalability of Intel's GPU microarchitecture in that link. Perf per watt claims =/= proof of a scalable GPU microarchitecture.
 

FireFly

Member
Vega was the final, scaled-up iteration of the GCN architecture.

Not seeing anything about the scalability of Intel's GPU microarchitecture in that link. Perf per watt claims =/= proof of a scalable GPU microarchitecture.
Even if you include GCN parts, Vega has slightly better performance per watt than RX570/RX580.


So there was no performance per watt regression.

And I am talking specifically about the ability to scale performance per watt up to a large design, since that was one of the key elements you doubted Intel could compete on. Well, 50% better performance per watt on top of Tiger Lake should be sufficient to be competitive. Even 2080 Ti efficiency levels would be OK, I think, in the current market conditions. (Are consumers really going to pass on a <$500 3070 competitor just because it needs 50W more power?) And that's a card that will be 3.5 years old and ~1.5 process nodes behind by the time Arc launches.

That just leaves performance per mm² scaling. And the chip has been estimated to be 396 mm² (https://wccftech.com/intel-dg2-512-...ed-bigger-than-nvidia-ampere-amd-rnda-2-gpus/), which is almost exactly the same size as the 3070 and 18% bigger than the 6700 XT.
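Quick check on those die-size figures (the GA104 and Navi 22 sizes below are commonly cited numbers, not taken from the linked article, so treat them as assumptions):

```python
arc_die_mm2 = 396   # estimated DG2-512 die size from the linked rumor
ga104_mm2 = 392     # GeForce RTX 3070/3070 Ti die (GA104), commonly cited figure
navi22_mm2 = 335    # Radeon RX 6700 XT die (Navi 22), commonly cited figure

print(f"vs GA104:   {arc_die_mm2 / ga104_mm2 - 1:+.1%}")   # ~ +1%, i.e. almost identical
print(f"vs Navi 22: {arc_die_mm2 / navi22_mm2 - 1:+.1%}")  # ~ +18%, matching the figure above
```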
 

SlimySnake

Flashless at the Golden Globes
What are you blabbering on about? Are you saying Intel aren't capable enough?
They are a massive company with plenty of resources and cash. They will add competition to the market. I expect them to do well and get a foot in the door. Then with the next release they will improve. Just them entering the market will put pressure on Nvidia and AMD to do better.
I just bought an i7-11700K. It runs hotter, consumes more power and offers worse performance than the equivalent Ryzen CPU. Their 11th-gen CPUs are a goddamn embarrassment.

I like the idea of them entering the market, since I agree that it will put pressure on Nvidia, but Intel is having a rough go of it lately, and if their CPU lineup is any indication, they are clearly not capable enough.

Hell, they are awful when compared to their own products from a few years ago. Look at the temps and power usage of the 8700K and 9700K versus the latest 11700K processors. 100% more power usage. 20-50% higher temps. For just a 20% boost in performance. And making CPUs is their business. Don't expect much from their GPUs.

[Benchmark charts: temperature and power-draw comparisons of the 8700K/9700K vs the 11700K]
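For what it's worth, the efficiency regression implied by those rough numbers is easy to quantify (using the approximate 100% / 20% figures from the post, not exact measurements):

```python
# Roughly +100% power draw for roughly +20% performance (8700K/9700K -> 11700K).
power_ratio = 2.0   # +100% power
perf_ratio = 1.2    # +20% performance

perf_per_watt_ratio = perf_ratio / power_ratio
print(f"Perf/watt vs the older chips: {perf_per_watt_ratio:.0%}")  # -> 60%, i.e. a ~40% regression
```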
 
Even if you include GCN parts, Vega has slightly better performance per watt than RX570/RX580.


So there was no performance per watt regression.

And I am talking specifically about the ability to scale performance per watt up to a large design, since that was one of the key elements you doubted Intel could compete on. Well, 50% better performance per watt on top of Tiger Lake should be sufficient to be competitive. Even 2080 Ti efficiency levels would be OK, I think, in the current market conditions. (Are consumers really going to pass on a <$500 3070 competitor just because it needs 50W more power?) And that's a card that will be 3.5 years old and ~1.5 process nodes behind by the time Arc launches.

That just leaves performance per mm² scaling. And the chip has been estimated to be 396 mm² (https://wccftech.com/intel-dg2-512-...ed-bigger-than-nvidia-ampere-amd-rnda-2-gpus/), which is almost exactly the same size as the 3070 and 18% bigger than the 6700 XT.

Well, I guess we'll see where it lands in practice.

Regardless of any perf per watt promises by Intel, I'm doubtful they can go from an efficient iGPU to a massive 396 mm² dedicated GPU and not lose some efficiency. You're routing signals around a physically larger chip. Of course, they won't just copy-paste the iGPU multiple times and call it a day; they'll make further optimisations in the microarchitecture to improve power efficiency. But I'm doubtful they'll achieve performance equal to or better than AMD and NVidia right out of the gate.

I would prefer to be wrong. But call me a sceptic. No matter how many engineers Intel might have poached, it's new ground to tread for Intel as a company.
 

John Wick

Member
I just bought an i7-11700K. It runs hotter, consumes more power and offers worse performance than the equivalent Ryzen CPU. Their 11th-gen CPUs are a goddamn embarrassment.

I like the idea of them entering the market, since I agree that it will put pressure on Nvidia, but Intel is having a rough go of it lately, and if their CPU lineup is any indication, they are clearly not capable enough.

Hell, they are awful when compared to their own products from a few years ago. Look at the temps and power usage of the 8700K and 9700K versus the latest 11700K processors. 100% more power usage. 20-50% higher temps. For just a 20% boost in performance. And making CPUs is their business. Don't expect much from their GPUs.

[Benchmark charts: temperature and power-draw comparisons of the 8700K/9700K vs the 11700K]
That's exactly what AMD were like before Ryzen. Intel got lazy being so dominant. They now have to go back to the drawing board and develop a new, better CPU architecture, which they are more than capable of. Competition drives innovation and brings prices down. Intel won't be in the dumps too long. As for GPUs, Intel just needs a foot in the door; then the sky's the limit.
 

Armorian

Banned
That's exactly what AMD were like before Ryzen. Intel got lazy being so dominant. They now have to go back to the drawing board and develop a new, better CPU architecture, which they are more than capable of. Competition drives innovation and brings prices down. Intel won't be in the dumps too long. As for GPUs, Intel just needs a foot in the door; then the sky's the limit.

Based on the rumors, Intel will easily beat AMD in gaming with the 12xx series, finally on 10nm for consumer CPUs.
 

tusharngf

Member

Intel’s ARC Alchemist Graphics Card Rumors Point To Three GPUs Aiming High-End & Entry-Level Gaming Market, Top Die Close To RTX 3070 Ti​




[Images: Intel ARC Alchemist GPU configuration rumor charts]




Intel ARC Alchemist vs NVIDIA GA104 & AMD Navi 22 GPUs​

GPU Name         | Alchemist DG-512                  | NVIDIA GA104                   | AMD Navi 22
Architecture     | Xe-HPG                            | Ampere                         | RDNA 2
Process Node     | TSMC 6nm                          | Samsung 8nm                    | TSMC 7nm
Flagship Product | ARC (TBA)                         | GeForce RTX 3070 Ti            | Radeon RX 6700 XT
Raster Engines   | 8                                 | 6                              | 2
FP32 Cores       | 32 Xe Cores                       | 48 SM Units                    | 40 Compute Units
FP32 Units       | 4096                              | 6144                           | 2560
FP32 Compute     | ~16 TFLOPs                        | 21.7 TFLOPs                    | 12.4 TFLOPs
TMUs             | 256                               | 192                            | 160
ROPs             | 128                               | 96                             | 64
RT Cores         | 32 RT Units                       | 48 RT Cores (V2)               | 40 RA Units
Tensor Cores     | 512 XMX Cores                     | 192 Tensor Cores (V3)          | N/A
Tensor Compute   | ~131 TFLOPs FP16 / ~262 TOPs INT8 | 87 TFLOPs FP16 / 174 TOPs INT8 | 25 TFLOPs FP16 / 50 TOPs INT8
L2 Cache         | TBA                               | 4 MB                           | 3 MB
Additional Cache | 16 MB Smart Cache?                | N/A                            | 96 MB Infinity Cache
Memory Bus       | 256-bit                           | 256-bit                        | 192-bit
Memory Capacity  | 16 GB GDDR6                       | 8 GB GDDR6X                    | 16 GB GDDR6
Launch           | Q1 2022                           | Q2 2021                        | Q1 2021
Source: Intel's ARC Alchemist Graphics Card Rumors Point To Three GPUs Aiming High-End & Entry-Level Gaming Market, Top Die Close To RTX 3070 Ti (wccftech.com)
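As a sanity check on the table, the FP32 compute column is just shader count × 2 ops (FMA) × clock. The ~1.95 GHz assumed for Alchemist below is a guess that makes the ~16 TFLOPs figure work out; the other two clocks are the cards' rated boost clocks:

```python
def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    # FP32 throughput = shader count * 2 ops per clock (FMA) * clock frequency
    return shaders * 2 * clock_ghz / 1000

print(f"{fp32_tflops(4096, 1.95):.1f} TFLOPs")  # Alchemist DG-512 at an assumed ~1.95 GHz -> ~16.0
print(f"{fp32_tflops(6144, 1.77):.1f} TFLOPs")  # GA104 / RTX 3070 Ti at 1.77 GHz boost   -> ~21.7
print(f"{fp32_tflops(2560, 2.42):.1f} TFLOPs")  # Navi 22 / RX 6700 XT at ~2.42 GHz boost -> ~12.4
```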
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not

Intel’s ARC Alchemist Graphics Card Rumors Point To Three GPUs Aiming High-End & Entry-Level Gaming Market, Top Die Close To RTX 3070 Ti

Correctly priced, this thing is a real contender.
 