
Will ML be a differentiator between the consoles?

RaySoft

Member
DirectML, short for Direct Machine Learning, underpins AMD's answer to Nvidia's DLSS (Deep Learning Super Sampling), enabling higher performance by using machine learning to upscale images to higher resolutions without a visual downgrade.
AMD has an answer to Nvidia's DLSS, and that answer is DirectML-powered Super Resolution. AMD plans to utilise machine learning to improve the visual quality of games, and AMD's solution will have the backing of Microsoft.
U9fvcRd.jpg

Microsoft confirmed that both of its next-generation consoles will support machine learning for games with DirectML, a result of its collaboration with AMD in creating the new consoles.
A component of Microsoft's DirectX feature set, DirectML isn't a Radeon-only technology, and its applications extend far beyond super-resolution. Over the coming years, future PC and Xbox games will bring machine learning into games in several new and innovative ways, impacting all gamers with supported hardware.
BTt8e9g.jpg

Microsoft has already showcased the potential of machine learning in gaming applications: the image below shows what happens when machine learning is used to upscale an image to four times its original resolution (basically from 1080p to 4K), generating a sharper final image with reduced aliasing. The comparison is between ML super sampling and bilinear upsampling.
siNpX8v.png
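
To picture what that comparison is doing, here is a toy PyTorch sketch (purely illustrative and untrained, not Microsoft's or AMD's actual model; the layer sizes and names are made-up assumptions) contrasting plain bilinear upsampling with a small learned upscaler:

```python
# Toy comparison: bilinear upscaling vs a tiny learned super-resolution net.
# ESPCN-style conv + pixel-shuffle; real DLSS / DirectML Super Resolution
# models are far larger and also use temporal data.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)  # rearranges channels into a 2x larger image

    def forward(self, x):
        return self.shuffle(self.body(x))

lr = torch.rand(1, 3, 540, 960)                     # pretend low-res frame
bilinear = F.interpolate(lr, scale_factor=2, mode="bilinear", align_corners=False)
learned = TinyUpscaler(scale=2)(lr)                 # untrained; just shows the data flow
print(bilinear.shape, learned.shape)                # both torch.Size([1, 3, 1080, 1920])
```

The bilinear path just interpolates between existing pixels; the learned path infers detail from patterns it was trained on, which is where the sharper, less aliased result in Microsoft's comparison comes from.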

The PS5 is confirmed not to use DirectX features, so even though AMD made the CPU and GPU for the PS5, it looks like the PS5 will not have DirectML.
So will Sony come up with an alternative method to compensate for not having this feature, given that machine learning is the future of gaming?
While DirectML hasn't received as much attention as DirectX Raytracing, you can be sure that developers are looking at the new API closely. As screen manufacturers start to push beyond 4K, machine-learning-based upscaling will only increase in popularity. 4K gaming is already a challenge for modern gaming hardware, and 8K is going to prove even more problematic for game makers and hardware vendors.
Technologies like DirectML will become vital for future games, both on PC and on consoles. The application of machine learning will allow developers to deliver higher levels of graphical fidelity without the insane hardware costs of traditional computational methods.
To my (limited) knowledge the INT4, INT8, FP16 and FP32 types are incorporated into the RDNA2 shaders themselves. Remember these datatypes can be used on anything from textures to compute/ML (integers are mostly used for ML since it doesn't need the precision of floats), so this setup makes it quite dynamic and powerful for devs to use them as they see fit.
So the "ML" part isn't a block of dedicated transistors (like the decompression hardware in the new consoles); it boils down to whether Sony has the same RDNA2 CUs in the PS5 or not.
If we go by what Cerny has said before (Road to PS5), they do. And frankly, the thought of Sony customizing every CU sounds both expensive and a bit unrealistic.
The PS5 doesn't have DirectML, of course, since that's part of the DirectX suite, but as long as the hardware is present they would have no problem utilizing ML with either a third-party API or their own solution.
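
To make the "integers are fine for ML" point concrete, here is a rough numpy sketch (made-up numbers, nothing console-specific) of symmetric INT8 quantization: the integer dot product, once rescaled, lands very close to the FP32 result.

```python
# Why ML inference tolerates low-precision integers: quantize fp32 weights and
# activations to int8, do the dot product with int32 accumulation, then rescale.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=256).astype(np.float32)   # stand-in "weights"
x = rng.normal(size=256).astype(np.float32)   # stand-in "activations"

def quantize_int8(v):
    scale = np.abs(v).max() / 127.0            # symmetric int8 scale
    return np.round(v / scale).astype(np.int8), scale

wq, ws = quantize_int8(w)
xq, xs = quantize_int8(x)

ref = float(w @ x)                                                  # fp32 result
q   = float(wq.astype(np.int32) @ xq.astype(np.int32)) * ws * xs    # int8 math, rescaled
print(f"fp32 {ref:.4f} vs int8 {q:.4f} (rel. err {abs(ref - q) / abs(ref):.3%})")
```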
 

rnlval

Member
No. That's not it; the subject is COMPUTE UNITS, nothing to do with memory bandwidth compensation from the PHY to memory.

You're going to be talking about controllers next lol
Hint: XSX GPU is about 155 watts for 12 TFLOPS and RX 6900 XT has about 24 TFLOPS for 300 watts. LOL

The RX 6900 XT's ~24 TFLOPS at 300 watts lands about where you'd expect when scaling up from the XSX GPU's 12 TFLOPS at 155 watts.
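
For what it's worth, the arithmetic on those quoted figures (numbers as quoted in this thread, not measured; treat them as rough estimates):

```python
# Perf/watt from the figures quoted above (approximate, vendor/thread numbers).
xsx_tflops, xsx_watts = 12.15, 155
n21_tflops, n21_watts = 24.0, 300    # RX 6900 XT estimate as quoted
print(f"XSX: {xsx_tflops / xsx_watts:.3f} TFLOPS/W, "
      f"6900 XT: {n21_tflops / n21_watts:.3f} TFLOPS/W")
# -> roughly 0.078 vs 0.080 TFLOPS per watt
```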
 

geordiemp

Member
Fine-grain clock gating = power saving. The XSX GPU delivers superior perf/watt compared to the RX 5700 XT, i.e. ~209 watts including a ~45-watt 8-core Zen 2 CPU at 3.6 to 3.8 GHz.

No.

Pervasive means spreading widely throughout an area; fine-grain clock gating is what it says.

It's telling you that the frequency control of the CUs in RDNA2 is both variable and spread throughout the silicon, likely per DCU. So the logical inference is that different DCUs can operate at different frequencies.

Let's take the last point, which was touched on in the RDNA1 white paper: redesigned data paths for efficient movement. The RDNA1 white paper talks about reducing the distance on silicon between data and the location where it is processed.

You will note all Big Navi and PC variants have the L2 closer, and the L1 feeds no more than 10 CUs. It's to do with efficient, fast clocks and propagation delay.

That's why RDNA2 shader arrays are <= 10 CUs.
 

Elias

Member
To my (limited) knowledge the INT4, INT8, FP16 and FP32 types are incorporated into the RDNA2 shaders themselves. Remember these datatypes can be used on anything from textures to compute/ML (integers are mostly used for ML since it doesn't need the precision of floats), so this setup makes it quite dynamic and powerful for devs to use them as they see fit.
So the "ML" part isn't a block of dedicated transistors (like the decompression hardware in the new consoles); it boils down to whether Sony has the same RDNA2 CUs in the PS5 or not.
If we go by what Cerny has said before (Road to PS5), they do. And frankly, the thought of Sony customizing every CU sounds both expensive and a bit unrealistic.
The PS5 doesn't have DirectML, of course, since that's part of the DirectX suite, but as long as the hardware is present they would have no problem utilizing ML with either a third-party API or their own solution.
Nothing suggests the PS5 has the hardware though. MS has stated the ML silicon added to the Series X/S is their own customization, not something present in all RDNA 2 hardware.
 

rnlval

Member
No.

Pervasive means spreading widely throughout an area; fine-grain clock gating is what it says.

It's telling you that the frequency control of the CUs in RDNA2 is both variable and spread throughout the silicon, likely per DCU. So the logical inference is that different DCUs can operate at different frequencies.

Let's take the last point, which was touched on in the RDNA1 white paper: redesigned data paths for efficient movement. The RDNA1 white paper talks about reducing the distance on silicon between data and the location where it is processed.

You will note all Big Navi and PC variants have the L2 closer, and the L1 feeds no more than 10 CUs. It's to do with efficient, fast clocks and propagation delay.

That's why RDNA2 shader arrays are <= 10 CUs.
Your argument is a red herring when the XSX GPU has similar perf/watt to the RX 6900 XT. LOL.
 

RaySoft

Member
Nothing suggests the PS5 has the hardware though. MS has stated the ML silicon added to the Series X/S is their own customization, not something present in all RDNA 2 hardware.
Yes, that could be the case. I hope we'll find out sooner rather than later...
 

geordiemp

Member
Your argument is a red herring when the XSX GPU has similar perf/watt to the RX 6900 XT. LOL.

So you refuse to read about the fine-grained gated frequency control that AMD explained is in the RDNA2 CUs.

Don't worry, you can't avoid it; the RDNA2 white paper will talk about it in depth, so it's not going away.

You're comparing perf per watt of a console APU vs a graphics card lol, whatever, and the 6900 lol, so funny.

Let's see how the XSX stacks up against the PS5 first, as the XSX is a 20% bigger die.
 
Last edited:

rnlval

Member
No.

Pervasive means spreading widely throughout an area; fine-grain clock gating is what it says.

It's telling you that the frequency control of the CUs in RDNA2 is both variable and spread throughout the silicon, likely per DCU. So the logical inference is that different DCUs can operate at different frequencies.

Let's take the last point, which was touched on in the RDNA1 white paper: redesigned data paths for efficient movement. The RDNA1 white paper talks about reducing the distance on silicon between data and the location where it is processed.

You will note all Big Navi and PC variants have the L2 closer, and the L1 feeds no more than 10 CUs. It's to do with efficient, fast clocks and propagation delay.

That's why RDNA2 shader arrays are <= 10 CUs.
For the PC, RDNA 2X includes NAVI 21, 22, and 23 SKUs. BIG NAVI (aka NAVI 21) is one of many RDNA 2X PC GPU SKUs. Expect a scaled-down mobile RDNA 2 GPU with a 128-bit GDDR6 bus.
 

DaGwaphics

Member
It is my understanding that the chip used in the XSX incorporates RDNA2 heavily but also borrows some pieces from CDNA 2 (AMD's GPU line targeting machine learning and servers). Is the integer acceleration part of RDNA2 or CDNA2? I never saw this specifically clarified in AMD's presentation (but I might have missed it).
 

rnlval

Member
So you refuse to read about the fine-grained gated frequency control that AMD explained is in the RDNA2 CUs.

Don't worry, you can't avoid it; the RDNA2 white paper will talk about it in depth, so it's not going away.

LOL, the 6900, you're delusional; let's see how the XSX stacks up against the PS5 first, as the XSX is a 20% bigger die.
Your technobabble is a red herring for the hardware feature set within the GPU.
 

geordiemp

Member
For the PC, RDNA 2X includes NAVI 21, 22, and 23 SKUs. BIG NAVI (aka NAVI 21) is one of many RDNA 2X PC GPU SKUs. Expect a scaled-down mobile RDNA 2 GPU with a 128-bit GDDR6 bus.

All with fine-grained gated frequency control and no more than 10 CUs per shader array.

Your technobabble is a red herring for the hardware feature set within the GPU.

So the AMD slide is technobabble now? You should inform AMD of your expert analysis.

W1BLdIj.png


Also aligns with the RDNA2 leakers from a while ago

 
Last edited:

rnlval

Member
It is my understanding that the chip used in the XSX incorporates RDNA2 heavily but also borrows some pieces from CDNA 2 (AMD's GPU line targeting machine learning and servers). Is the integer acceleration part of RDNA2 or CDNA2? I never saw this specifically clarified in AMD's presentation (but I might have missed it).
CDNA has discrete Tensor cores along with a proper FP64-capable CU design.
 

rnlval

Member
All with fine-grained gated frequency control and no more than 10 CUs per shader array.

So the AMD slide is technobabble now?
BiG NAVI (NAVI 21) has up to 8 Shader Engines with 128 ROPS backed by 128 MB L3 cache.

Lesser RDNA 2X ASICs that will replace the RX 5500, 5600 XT, 5700, and 5700 XT SKUs will have a mix of shader-engine-to-CU-count ratios, depending on AMD's performance and price targets.

The Xbox Series S is a 128-bit-bus GDDR6, 22 CU, scaled-down RDNA 2X implementation with the same DirectX 12 Ultimate (DirectX 12 Feature Level 12_2) feature set support as its larger-scale sibling, the XSX GPU, and BiG NAVI (NAVI 21).
 

geordiemp

Member
BiG NAVI (NAVI 21) has up to 8 Shader Engines with 128 ROPS backed by 128 MB L3 cache.

Lesser RDNA 2X ASICs that will replace the RX 5500, 5600 XT, 5700, and 5700 XT SKUs will have a mix of shader-engine-to-CU-count ratios, depending on AMD's performance and price targets.

The Xbox Series S is a 128-bit-bus GDDR6, 22 CU, scaled-down RDNA 2X implementation with the same DirectX 12 Ultimate (DirectX 12 Feature Level 12_2) feature set support as its larger-scale sibling, the XSX GPU, and BiG NAVI (NAVI 21).

No more than 10 CUs per shader array, each and every one, and RDNA2 also has fine-grained gated frequency control, which will be detailed in the AMD white paper.

Stick around.
 
Last edited:

rnlval

Member
All with fine-grained gated frequency control and no more than 10 CUs per shader array.

So the AMD slide is technobabble now? You should inform AMD of your expert analysis.

Also aligns with the RDNA2 leakers from a while ago


NAVI 10 does NOT support the optional NAVI INT8 dot4 and INT4 dot8.
 

rnlval

Member
No more than 10 CUs per shader array, each and every one, and RDNA2 also has fine-grained gated frequency control, which will be detailed in the AMD white paper.

Stick around.
That's a red herring argument when the XSX GPU delivers ~12.1 TFLOPS at 155 watts and the RX 6900 XT ~24 TFLOPS at 300 watts.
 

Plantoid

Member
If only Xbox had good first-party studios... it would be the end for Sony. They nail everything, but they're missing the games...
 

rnlval

Member
UMMMMM OKAY. WOW. The PS5 doesn't have Microsoft software. HOLY 💩



5AZnV9w.jpg
 
Last edited:

Thirty7ven

Banned


5AZnV9w.jpg

RDNA2 is Navi based too. You were comparing the XSX to big Navi just above.

Some of you are hopeless.
 

rnlval

Member
RDNA2 is Navi based too. You were comparing the XSX to big Navi just above.

Some of you are hopeless.
BiG NAVI is just NAVI 21, which has 8 shader engines, 80 CUs (40 DCUs), 128 ROPs, and a 128 MB L3 cache. Expect lesser RDNA 2 (NAVI 2X) SKUs at lower performance and price points.
 

geordiemp

Member

That's a red herring argument when the XSX GPU delivers ~12.1 TFLOPS at 155 watts and the RX 6900 XT ~24 TFLOPS at 300 watts.

OK, I'm talking about RDNA2 CU architecture and you're talking teraflops now like a games journalist.

You really don't get it, so you'll just have to wait until you can read the RDNA2 white paper. Not long now anyway, as it's usually out around release.
 
Last edited:

rnlval

Member
OK, I'm talking about RDNA2 CU architecture and you're talking teraflops now like a games journalist.

You really don't get it, so you'll just have to wait until you can read the RDNA2 white paper. Not long now anyway, as it's usually out around release.
Red herring; CU FLOPS perf/watt between the XSX and RX 6900 XT is similar.
 
Last edited:

Thirty7ven

Banned
BiG NAVI is just NAVI 21, which has 8 shader engines, 80 CUs (40 DCUs), 128 ROPs, and a 128 MB L3 cache. Expect lesser RDNA 2 (NAVI 2X) SKUs at lower performance and price points.

The point being that RDNA2 is Navi-based. You really are clueless.

The ML tech he was talking about is specific cores, like tensor cores. That's why he made a point of mentioning a unit specific to audio processing.
 

rnlval

Member
The point being that RDNA2 is Navi-based. You really are clueless.

The ML tech he was talking about is specific cores, like tensor cores. That's why he made a point of mentioning a unit specific to audio processing.
Prove I didn't know RDNA 2 is NOT NAVI based.
 

DaGwaphics

Member
rnlval I've discovered the Mark Twain philosophy is best used when engaging some users. :messenger_tears_of_joy:

We'll see what the rest of the RDNA2 stack looks like. So far, the XSX SoC appears comparable in efficiency: similar sustained clocks considering the power envelope in play and the inclusion of the Zen 2 CPU on the XSX chip, etc. It's missing the GPU L3, but that was to be expected considering the transistor increase it would require. It looks like MS built a system targeting a max system power draw of 250 W or so, and did an amazing job balancing everything to get there.
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
Saying something has ML or not is like saying it has physics or image processing. It is a class of algorithms. Algorithms are software, and ML has become mainstream due to the massive parallel processing in GPUs. Both the PS5 and XBX have almost identical AMD hardware. Draw your own conclusions.
 

rnlval

Member
Saying something has ML or not is like saying it has physics or image processing. It is a class of algorithms. Algorithms are software, and ML has become mainstream due to the massive parallel processing in GPUs. Both the PS5 and XBX have almost identical AMD hardware. Draw your own conclusions.
Source 1
From https://wccftech.com/xbox-series-xs...ning-powered-shader-cores-says-quantic-dream/

The shader cores of the Xbox are also more suitable to machine learning, which could be an advantage if Microsoft succeeds in implementing an equivalent to Nvidia’s DLSS (an advanced neural network solution for AI).

----


Source 2
From https://www.neogaf.com/threads/sony...update-read-op.1556484/page-10#post-259359528

aZ4F2WF.jpg


ML is optional with NAVI.

Draw your own conclusions.
 
Last edited:

longdi

Banned
Source 1
From https://wccftech.com/xbox-series-xs...ning-powered-shader-cores-says-quantic-dream/

The shader cores of the Xbox are also more suitable to machine learning, which could be an advantage if Microsoft succeeds in implementing an equivalent to Nvidia’s DLSS (an advanced neural network solution for AI).

----


Source 2

aZ4F2WF.jpg


ML is optional with NAVI.

Yep, if Mark Cerny didn't talk about ML, it most likely won't have ML hardware. Let it go, friends.

What I'm surprised about is why the PS5 doesn't support VRR out of the gate. Something stinks.

RDNA1 Navi issues from the past:

Finally, AMD has partially updated their display controller. I say “partially” because while it’s technically an update, they aren’t bringing much new to the table. Notably, HDMI 2.1 support isn’t present – nor is more limited support for HDMI 2.1 Variable Rate Refresh. Instead, AMD’s display controller is a lot like Vega’s: DisplayPort 1.4 and HDMI 2.0b, including support for AMD’s proprietary Freesync-over-HDMI standard. So AMD does have variable rate capabilities for TVs, but it isn’t the HDMI standard’s own implementation.
 

Neo_game

Member
No, I do not think it is going to matter. Microsoft said the SX's INT4 rate gives over 97 TOPS. According to Nvidia, the RTX 2080 Ti gives 455.4 TOPS, so that is about 5 times faster. Actually, even the 2060 is twice as fast as the SX, so I guess that is too slow and nothing substantial enough to make any meaningful difference. Whatever it is, we will know in a few years, but I do not think it is anything to be excited about.
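
As a sanity check on those ratios (figures as quoted in the thread and public spec sheets, so treat them as approximate):

```python
# The ~97 TOPS figure follows from packed INT4 running at 8x the FP32 rate.
xsx_fp32_tflops = 12.15
xsx_int4_tops = xsx_fp32_tflops * 8          # ~97 TOPS
rtx_2080ti_int4_tops = 455.4                 # Turing tensor-core INT4 figure
print(f"XSX INT4 ~{xsx_int4_tops:.0f} TOPS; "
      f"2080 Ti is ~{rtx_2080ti_int4_tops / xsx_int4_tops:.1f}x higher")
```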
 

geordiemp

Member
No, I do not think it is going to matter. Microsoft said the SX's INT4 rate gives over 97 TOPS. According to Nvidia, the RTX 2080 Ti gives 455.4 TOPS, so that is about 5 times faster. Actually, even the 2060 is twice as fast as the SX, so I guess that is too slow and nothing substantial enough to make any meaningful difference. Whatever it is, we will know in a few years, but I do not think it is anything to be excited about.

And what are the INT TOPS on the 6800 XT?
 
Last edited:

rnlval

Member
No, I do not think it is going to matter. Microsoft said the SX's INT4 rate gives over 97 TOPS. According to Nvidia, the RTX 2080 Ti gives 455.4 TOPS, so that is about 5 times faster. Actually, even the 2060 is twice as fast as the SX, so I guess that is too slow and nothing substantial enough to make any meaningful difference. Whatever it is, we will know in a few years, but I do not think it is anything to be excited about.
The limitation is external memory bandwidth and texture cache.
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
Source 1
From https://wccftech.com/xbox-series-xs...ning-powered-shader-cores-says-quantic-dream/

The shader cores of the Xbox are also more suitable to machine learning, which could be an advantage if Microsoft succeeds in implementing an equivalent to Nvidia’s DLSS (an advanced neural network solution for AI).

----


Source 2
From https://www.neogaf.com/threads/sony...update-read-op.1556484/page-10#post-259359528

ML is optional with NAVI.

Draw your own conclusions.

Confirmation bias at work. Please let us know how the shader cores differ. Sorry if I don't trust technical talk from a CEO. It's not like AMD has dedicated ML hardware.
 
Last edited:

geordiemp

Member
The limitation is external memory bandwidth and texture cache.

It's like you're having a different conversation to everybody else. He was talking about the INT difference between the XSX and Nvidia, and I replied asking what the INT TOPS of the 6800 are...

And you start talking about a totally different subject. What on earth are you talking about? Do you know?

It's like two people talking about football, and you say it takes two minutes to fry an egg.
 
Last edited:

rnlval

Member
Confirmation bias at work. Please let us know how the shader cores differ. Sorry if I don't trust technical talk from a CEO. It's not like AMD has dedicated ML hardware.
This topic is not about NVIDIA's and CDNA's discrete Tensor cores.


scV8pjf.png
 

yurinka

Member
My source is that AMD DirectML uses a DirectX feature, which the PS5 does not use since it is Microsoft technology. Did you even read the post? Now show me where it shows that the PS5 has AMD DirectML.
As always, DirectML is the DirectX (Microsoft's) name for the technique. This doesn't mean they will be the only ones to do this.

As always, Microsoft will do their own implementation and others (OpenGL, Sony, etc.) will do their own implementations, which will have different names and maybe be a bit different, but will do essentially the same thing.
 

rnlval

Member
It's like you're having a different conversation to everybody else. He was talking about the INT difference between the XSX and Nvidia, and I replied asking what the INT TOPS of the 6800 are...

And you start talking about a totally different subject. What on earth are you talking about? Do you know?

It's like two people talking about football, and you say it takes two minutes to fry an egg.
For integer operations per second: TIOPS or TOPS.
For floating-point operations per second: TFLOPS.

AMD hasn't fully revealed "BiG NAVI"'s DCU (dual CU) capabilities.
 

Thirty7ven

Banned
Prove I didn't know RDNA 2 is NOT NAVI based.

You mean proof you didn’t know that RDNA2 is Navi based right?

Anyway, you are using a tweet that claims that because something is Navi-based it doesn't have ML. The same tweet references a unit dedicated to audio, so clearly he's comparing to dedicated units like tensor cores.

Having INT4 and INT8 support does not mean you suddenly have dedicated blocks for ML ops.
 
Last edited:

rnlval

Member
It's like you're having a different conversation to everybody else. He was talking about the INT difference between the XSX and Nvidia, and I replied asking what the INT TOPS of the 6800 are...

And you start talking about a totally different subject. What on earth are you talking about? Do you know?

It's like two people talking about football, and you say it takes two minutes to fry an egg.
Any TIOPS or TFLOPS debate should factor in memory bandwidth limits. Serious ML workloads run on NVIDIA's A100, backed by 1.6 TB/s of memory bandwidth.
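
A rough way to see the bandwidth point (a back-of-envelope sketch; the throughput and bandwidth figures are the publicly quoted specs and are approximate): the arithmetic intensity you would need to keep the peak INT8 rates fed.

```python
# Ops per byte of memory traffic needed to saturate the quoted INT8 peaks.
def ops_per_byte(tops, bandwidth_gbps):
    return (tops * 1e12) / (bandwidth_gbps * 1e9)

print(f"XSX  (~49 INT8 TOPS, 560 GB/s): {ops_per_byte(49, 560):.0f} ops/byte")
print(f"A100 (~624 INT8 TOPS, 1600 GB/s): {ops_per_byte(624, 1600):.0f} ops/byte")
```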
 

rnlval

Member
1. You mean proof you didn't know that RDNA2 is Navi based right?

2. Anyway, you are using a tweet that claims that because something is Navi-based it doesn't have ML. The same tweet references a unit dedicated to audio, so clearly he's comparing to dedicated units like tensor cores.

3. Having INT4 and INT8 support does not mean you suddenly have dedicated blocks for ML ops.
1. Yep, prove it.

2. Under NAVI, dot4 and dot8 are optional features.

3. Who cares, as long as it can split the existing 32-bit ALUs for dot4 and dot8 workloads (rough sketch below). Having a separate tensor block also increases latency, and any theoretical TIOPS/TFLOPS debate is useless without factoring in memory bandwidth limits.
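
Here is a rough Python emulation of what "splitting a 32-bit ALU" means for an INT8 dot4 (purely illustrative; the packing convention and helper names are my own, not AMD's ISA):

```python
# Emulate an INT8 dot4: four int8 lanes packed into one 32-bit word,
# multiplied lane-wise and accumulated into an int32.
import numpy as np

def pack_i8x4(vals):
    """Pack four signed int8 values into one uint32 (little-endian lanes)."""
    return np.frombuffer(np.array(vals, dtype=np.int8).tobytes(), dtype=np.uint32)[0]

def dot4_i8(a_packed, b_packed, acc=0):
    """Unpack the lanes, multiply, and accumulate in int32."""
    a = np.frombuffer(np.uint32(a_packed).tobytes(), dtype=np.int8).astype(np.int32)
    b = np.frombuffer(np.uint32(b_packed).tobytes(), dtype=np.int8).astype(np.int32)
    return int(acc + (a * b).sum())

a = pack_i8x4([1, -2, 3, 4])
b = pack_i8x4([5, 6, -7, 8])
print(dot4_i8(a, b))   # 1*5 + (-2)*6 + 3*(-7) + 4*8 = 4
```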
 

Thirty7ven

Banned
1. Yep, prove it.

2. Under NAVI, dot4 and dot8 are optional features.

3. Who cares, as long as it can split the existing 32-bit ALUs for dot4 and dot8 workloads. Having a separate tensor block also increases latency, and any theoretical TIOPS/TFLOPS debate is useless without factoring in memory bandwidth limits.

You can’t prove PS5 doesn’t have int4/int8 support. That’s all.

But whatever man, take it home and bake a cake.
 

rnlval

Member
As always, DirectML is the DirectX (Microsoft's) name for the technique. This doesn't mean they will be the only ones to do this.

As always, Microsoft will do their own implementation and others (OpenGL, Sony, etc.) will do their own implementations, which will have different names and maybe be a bit different, but will do essentially the same thing.
DirectML can run on RPM (rapid packed math) double-rate FP16 or even single-rate FP32 hardware. The outcome is a performance boost when using smaller datatypes.
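
One common way to hit DirectML from code is through onnxruntime's DirectML execution provider; a minimal sketch is below (it assumes the onnxruntime-directml package, which provides "DmlExecutionProvider"; the upscaler.onnx model file is hypothetical). The native DirectML API itself is a C++/Direct3D 12 interface.

```python
# Minimal sketch: run an ONNX model on DirectML via onnxruntime.
# Assumes the onnxruntime-directml package; "upscaler.onnx" is a placeholder.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    "upscaler.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)
lr_frame = np.random.rand(1, 3, 540, 960).astype(np.float32)   # dummy input
out = sess.run(None, {sess.get_inputs()[0].name: lr_frame})[0]
print(out.shape)
```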
 

rnlval

Member
You can’t prove PS5 doesn’t have int4/int8 support. That’s all.

But whatever man, take it home and bake a cake.
Source 1
From https://wccftech.com/xbox-series-xs...ning-powered-shader-cores-says-quantic-dream/

The shader cores of the Xbox are also more suitable to machine learning, which could be an advantage if Microsoft succeeds in implementing an equivalent to Nvidia’s DLSS (an advanced neural network solution for AI).


The context is shader core hardware that is MORE suitable for machine learning.


Sony responded to the hardware ray-tracing PR mess, i.e. obtaining information from Sony is like extracting water from a dry rock.
 
Last edited:

rnlval

Member
Yeah, that is what I said :).

Yup, but you can continue trying to put a 40 CU RDNA2 based semi-custom design down hehe 😉.
The "custom" debate from the PS4 and PS4 Pro amounted to BS PR relative to the R7 265 and RX 470 respectively.

I'm not buying the "custom" debate. I'm not going to fall for it again!
 

Ps5ProFoSho

Member
Sony has been in the console game for a LONG TIME and is the only reason Microsoft even jumped into it. MS has been behind the 8-ball every time PS has done something, and then Microsoft ends up catching up. I really don't think MS has some unbelievable technology that PS can't or won't replicate. Who's talking about "secret sauce" now?
 

rnlval

Member
OK, I'm talking about RDNA2 CU architecture and you're talking teraflops now like a games journalist.

You really don't get it, so you'll just have to wait until you can read the RDNA2 white paper. Not long now anyway, as it's usually out around release.
The CU is about TFLOPS, TIOPS, TMUs, and RT cores. LOL.

ROPs read/write and rasterization hardware sit outside the CU.
 
Last edited:

rnlval

Member
Sony has been in the console game for a LONG TIME and is the only reason Microsoft even jumped into it. MS has been behind the 8-ball every time PS has done something, and then Microsoft ends up catching up. I really don't think MS has some unbelievable technology that PS can't or won't replicate. Who's talking about "secret sauce" now?
1. MSX is a standardized home computer architecture, announced by Microsoft and ASCII Corporation. Sony was one of many manufacturers for MSX.

MSX was MS's attempt to clone Commodore 64-like 8-bit machines with a common software standard. The Commodore 64 also ran Microsoft BASIC.


2. https://en.wikipedia.org/wiki/Amiga_Hombre_chipset
The original plan for the Hombre-based computer system was to have Windows NT compatibility, with native AmigaOS recompiled for the new big-endian CPU to run legacy 68k Amiga software through emulation. Commodore chose the PA-7150 microprocessor over the MIPS R3000 microprocessor and first generation embedded PowerPC microprocessors, mainly because these low-cost microprocessors were unqualified to run Windows NT.

Amiga Hombre was in Sony's PS1 generation.

The Sega Dreamcast wasn't MS's first game console partnership.
 
Last edited:
No

INT8 and INT4: you know you can do this on standard hardware cores, right?

Having dedicated INT4 and INT8 vs using larger cores and splitting them does not mean software.

And where does it say anywhere that super sampling from AMD needs a lot of dedicated INT capability? It's likely more temporal anyway.

It's called reaching.

I don't think it's reaching when it's literally on the fucking Hot Chips presentation deck, Geordie. It doesn't matter anyway. It just annoys me sometimes that you try to speak with such authority on matters you're not an expert in.
 