
Anandtech: Installing Windows on an Xbox One APU: The Chuwi Aerobox Review

LordOfChaos

Member
Mar 31, 2014
11,895
7,305
985

What a weird product. I wonder why AMD decided to sell these APUs to rando companies. It largely does seem to be the Xbox One APU, at least in CPU and GPU configuration, minus the eSRAM, but the memory speed gimps the graphics. It would have been interesting to see definitive tests of GDDR as system memory and its impact on latency.
 

Redneckerz

Those long posts don't cover that red neck boy
Jun 25, 2018
3,939
3,647
725
Stuck in 1Q84.

What a weird product. I wonder why AMD decided to sell these APUs to rando companies. It largely does seem to be the Xbox One APU, but the memory speed gimps the graphics. It would have been interesting to see definitive tests of GDDR as system memory and its impact on latency.
It's surplus and bad bins, really.

Anandtech also found some odd differences versus the AliExpress motherboards that you can now buy by the dozen:
  • The Aerobox has the full 14 CUs that Xbox One S chips physically have (the console disables 2 of those for improved yields), so 896 cores.
  • The AliExpress motherboards list 384 cores, or half an Xbox One.
  • Ian's Aerobox has 8 GB of DDR3 in dual-channel. That gives it a paltry 14 GB/s. However, The Chip Collective posted a rather excellent review pointing out that the whole APU really wants quad-channel memory, and it also highlighted the performance (1.5 TF). Unfortunately, the APU is extremely memory-picky, so I wouldn't blame Ian for not mucking around with it (he got 8 GB in the package). The follow-up from The Chip Collective describes additional tips.
  • Both TCC and Anandtech note that this APU basically does Xbox One games at 720p. Anandtech demonstrates this by running Borderlands 3 at 720p30 low. By comparison, the Xbox One S version (with 768 cores) runs it at 900p30.
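That 14 GB/s bullet is easy to sanity-check: peak DDR3 bandwidth is transfer rate × 8 bytes per 64-bit channel × channel count. A quick sketch, where the 875 MT/s module speed is my back-of-envelope assumption, back-solved from the 14 GB/s figure:

```python
def ddr3_bandwidth_gbps(mt_per_s, channels, bus_bits=64):
    """Peak DDR3 bandwidth in GB/s: transfers/s * bytes per transfer * channels."""
    return mt_per_s * 1e6 * (bus_bits // 8) * channels / 1e9

# ~14 GB/s in dual-channel implies roughly 875 MT/s effective module speed
dual = ddr3_bandwidth_gbps(875, channels=2)  # 14.0 GB/s
# the quad-channel layout TCC describes would double that
quad = ddr3_bandwidth_gbps(875, channels=4)  # 28.0 GB/s
```

For reference, the actual Xbox One gets 68 GB/s from DDR3-2133 on a 256-bit (quad-channel-equivalent) bus, which shows how far off the dual-channel Aerobox config is from what the APU was designed around.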
So what is the use case for an Aerobox?
Well, much the same as the use case for an Atari VCS: indies. Steam has a ton of indie games and indie shooters that only need lowly GPUs (8800 GT, GTS 450, Radeon HD 5670, etc.) to run at high resolutions. Both the VCS and the Aerobox can easily handle those, plus older, PS360-generation games.

It all stands or falls with the drivers, though. You ain't getting a ton of gaming grunt out of the Aerobox, but if you treat it as the niche product it is, then it's a good option. Plus, it's unique. It really is a reconfigured Xbox One APU.
 

Jagz

Member
Feb 13, 2018
155
123
280
Indies, 360-era games, PS2-and-below emulation, cloud-streamed games (GeForce Now, etc.), Kodi and other video streaming, and work/light productivity would be among the main reasons people buy something like this. There are probably much better PC options at a similar or better price, though.
 

Stray Parts

Member
Jun 8, 2016
92
125
300
An interesting oddity for sure.

The out-of-date 2017 AMD GPU driver makes it not very useful, but I guess someone somewhere has a need for this product.

Maybe Linux with AMD's open-source GPU drivers (if support were added) could make it more tolerable.
 

longdi

Ni hao ma, fellow kids?
Jun 7, 2004
8,870
6,569
1,890
Looking at the CPU benchmarks, remember the Xbox One had a faster-clocked CPU than the PS4.

This gen, the Series X and PS5 CPUs are going to be 8x faster. :messenger_open_mouth:
Like the SSD, the CPU + SSD leaps from last gen are so wide that it is down to the GPU to differentiate.
A faster GPU will take the crown in the next few years. 🤷‍♀️
 

Redneckerz

Those long posts don't cover that red neck boy
Jun 25, 2018
3,939
3,647
725
Stuck in 1Q84.
It does not even have eSRAM, that's weird, why would they take it off....
Probably because Windows would need driver support for that. As eSRAM is not a common memory type, you would need something like what Intel did with its Crystal Well dies (eDRAM) to get something similar. Except that Intel brought native Windows support for it by exposing it as an L4 cache or something.

Such a thing seems a bit too much development work for something such as this.

Besides, it is interesting how you can essentially have this one run at either 1.75 GHz (as an embedded RX part) or 2.35 GHz (as the A9-9820), which is essentially the same clock speed as the Scorpio Engine inside the One X.
 

RaZoR No1

Member
Jun 17, 2015
551
637
545
Why would anybody buy this thing in 2020?
The CPU is too slow, and that RAM... oh boy...
Even the Xbox One X version would be bad with Windows 10.
 
Mar 7, 2017
2,396
5,056
510
It's surplus and bad bins, really.

Anandtech also found some odd differences versus the AliExpress motherboards that you can now buy by the dozen:
  • The Aerobox has the full 14 CUs that Xbox One S chips physically have (the console disables 2 of those for improved yields), so 896 cores.
  • The AliExpress motherboards list 384 cores, or half an Xbox One.
  • Ian's Aerobox has 8 GB of DDR3 in dual-channel. That gives it a paltry 14 GB/s. However, The Chip Collective posted a rather excellent review pointing out that the whole APU really wants quad-channel memory, and it also highlighted the performance (1.5 TF). Unfortunately, the APU is extremely memory-picky, so I wouldn't blame Ian for not mucking around with it (he got 8 GB in the package). The follow-up from The Chip Collective describes additional tips.
  • Both TCC and Anandtech note that this APU basically does Xbox One games at 720p. Anandtech demonstrates this by running Borderlands 3 at 720p30 low. By comparison, the Xbox One S version (with 768 cores) runs it at 900p30.
So what is the use case for an Aerobox?
Well, much the same as the use case for an Atari VCS: indies. Steam has a ton of indie games and indie shooters that only need lowly GPUs (8800 GT, GTS 450, Radeon HD 5670, etc.) to run at high resolutions. Both the VCS and the Aerobox can easily handle those, plus older, PS360-generation games.

It all stands or falls with the drivers, though. You ain't getting a ton of gaming grunt out of the Aerobox, but if you treat it as the niche product it is, then it's a good option. Plus, it's unique. It really is a reconfigured Xbox One APU.

How can it be surplus and bad bins of the X1 APU when there is no eSRAM on the die, no SHAPE audio coprocessor, and a very different memory controller setup?

If the silicon is different it's an entirely different design. AMD probably used the X1 APU design as a template and then modified it for this product, because that made the engineering much cheaper and faster. It's clearly not, however, a manufactured X1 APU chip; otherwise AMD would be paying MS royalties to sell it into other markets.
 

gundalf

Member
Oct 31, 2012
656
145
640
The Xbox Series S|X APU might also be released into the wild in the next 2-3 years, as the Azure team plans to use them in its servers (probably remote rendering, ML, etc., though not for grunt work like App Services).
 

Redneckerz

Those long posts don't cover that red neck boy
Jun 25, 2018
3,939
3,647
725
Stuck in 1Q84.
How can it be surplus and bad bins of the X1 APU when there is no eSRAM on the die, no SHAPE audio coprocessor, and a very different memory controller setup?
The memory controller, correct me if I'm wrong, is not fully part of the APU. And if it is, one can just disable the components that connect to the eSRAM. The latter is, after all, an external part, like the DDR3 modules are.
If the silicon is different it's an entirely different design. AMD probably used the X1 APU design as a template and then modified it for this product, because that made the engineering much cheaper and faster. It's clearly not, however, a manufactured X1 APU chip; otherwise AMD would be paying MS royalties to sell it into other markets.
The serial numbers suggest these APUs were made in 2013-2014, which would fit the impression that they are, well, reconfigured Xbox One chips.

If we go by those dates, then which APUs back then had 8 Jaguar cores? The best you could do was an Athlon 5370, which is a quad-core Jaguar part. Now, technically you could combine two of those modules (after all, that's how they got the PS4/XBO APUs in the first place), but then you don't have 896 SPs to go along with them.

For it to be a new platform, they would have had to build a custom solution based on two of those Athlon 5370 modules, somehow integrate an 896-core GCN-based GPU, and do it in 2013. That seems highly improbable.

If it were a new 2020 part, then AMD would be smoking big blunts, because why the heck would you design a new octa-core Jaguar part on a manufacturing process (28 nm) that is several years old?

So no, given the evidence presented by The Chip Collective, WinTab, and Anandtech, it makes more sense to call it what it is: a rebranded and slightly modified Xbox One APU. Slightly modified, since it allows for a One X CPU clock speed and has two extra CUs enabled (also present in Xbox One dev units). The only difference is in the quality of the boards. The Aerobox hardware seems to get the full deal, whereas the AliExpress motherboards are often more wonky in terms of functionality.
 
Mar 7, 2017
2,396
5,056
510
The memory controller, correct me if I'm wrong, is not fully part of the APU. And if it is, one can just disable the components that connect to the eSRAM. The latter is, after all, an external part, like the DDR3 modules are.

The serial numbers suggest these APUs were made in 2013-2014, which would fit the impression that they are, well, reconfigured Xbox One chips.

If we go by those dates, then which APUs back then had 8 Jaguar cores? The best you could do was an Athlon 5370, which is a quad-core Jaguar part. Now, technically you could combine two of those modules (after all, that's how they got the PS4/XBO APUs in the first place), but then you don't have 896 SPs to go along with them.

For it to be a new platform, they would have had to build a custom solution based on two of those Athlon 5370 modules, somehow integrate an 896-core GCN-based GPU, and do it in 2013. That seems highly improbable.

If it were a new 2020 part, then AMD would be smoking big blunts, because why the heck would you design a new octa-core Jaguar part on a manufacturing process (28 nm) that is several years old?

So no, given the evidence presented by The Chip Collective, WinTab, and Anandtech, it makes more sense to call it what it is: a rebranded and slightly modified Xbox One APU. Slightly modified, since it allows for a One X CPU clock speed and has two extra CUs enabled (also present in Xbox One dev units). The only difference is in the quality of the boards. The Aerobox hardware seems to get the full deal, whereas the AliExpress motherboards are often more wonky in terms of functionality.

No. The memory controller PHYs are indeed on-die and take up precious die space. The eSRAM is also on-die on the XB1 APU; that's why it's called "embedded" SRAM, and it also takes up considerable die space (it's why the XB1 GPU has fewer CUs than the PS4's while having an overall larger die, i.e. 363 mm² vs 349 mm²).

The die size for this thing being 360 mm² and not 363 mm² all but confirms it's an independent product, designed and fabricated in isolation from anything Xbox related.

In addition, you can't just take an XB1 chip, disable the eSRAM, and expect it to work, as the on-die communication circuitry is designed specifically for direct GPU-to-eSRAM comms, because DDR3 on a 128-bit bus is much too slow for a GPU of this size. The XB1 GPU was specifically designed to read and write directly to eSRAM more often than not; although it is able to read from DDR3, doing so will tank performance. So specifying a product like this for a desktop PC, where reads from main memory need to go through a PCI-E bus, further increasing latency dramatically, would absolutely require a complete redesign of the XB1 APU memory bus and interface to avoid it being completely worthless.

Anandtech, WinTab, and The Chip Collective are simply wrong. You cannot modify a console APU post-fabrication to run in a standard desktop PC. It simply won't work.

Fundamentally, the strongest proof that this is not the XB1 APU (as opposed to a chip based on the XB1 APU design) is that the die sizes are not identical. There's nothing more you need to prove this conclusively.
 

Redneckerz

Those long posts don't cover that red neck boy
Jun 25, 2018
3,939
3,647
725
Stuck in 1Q84.
No. The memory controller PHYs are indeed on-die and take up precious die space. The eSRAM is also on-die on the XB1 APU; that's why it's called "embedded" SRAM, and it also takes up considerable die space (it's why the XB1 GPU has fewer CUs than the PS4's while having an overall larger die, i.e. 363 mm² vs 349 mm²).

The die size for this thing being 360 mm² and not 363 mm² all but confirms it's an independent product, designed and fabricated in isolation from anything Xbox related.
Thanks for responding :) I don't read a rebuttal of the theory I set forward, though; what do you say about that? What makes you conclude that the deduction I posted is completely illogical? Your primary point of disagreement seems to be the die sizes, but what else?

The difference is 3 mm². If you truly feel it's an independent product, can you provide evidence for this? Because I have a hard time believing that three independent sites have it completely wrong (and that it must be a completely unique product).

The XB1 GPU was specifically designed to read and write directly to eSRAM more often than not; although it is able to read from DDR3, doing so will tank performance.
But 32 MB of eSRAM is practically nothing for texture reads/writes. Like the eDRAM on Xenos, it is used as a separate buffer to enable cheap anti-aliasing and rapid fetches. But you need the 8 GB of DDR3 as well.

If it purely used eSRAM as GPU memory, you would have a GPU with the same memory size as an Nvidia TNT2 from 1999.
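The point that 32 MB is a working buffer rather than general GPU memory checks out with simple arithmetic. A sketch, where the 1080p RGBA8 color plus 32-bit depth/stencil setup is an illustrative assumption, not the XB1's actual render configuration:

```python
def render_target_mib(width, height, bytes_per_pixel):
    """Footprint of one render target in MiB."""
    return width * height * bytes_per_pixel / 2**20

# hypothetical 1080p targets, 4 bytes/pixel each
color = render_target_mib(1920, 1080, 4)  # ~7.9 MiB
depth = render_target_mib(1920, 1080, 4)  # ~7.9 MiB
total = color + depth                     # ~15.8 MiB of a 32 MiB budget
```

A couple of 1080p targets fit with room to spare, but gigabytes of textures and geometry obviously have to live in the DDR3 pool.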

For this APU, you can set in the BIOS how much UMA memory it gets. By default it uses 512 MB, but it can be set to 2 GB of DDR3. This is a common APU parameter; the Ryzen G series also allows up to a 2 GB UMA framebuffer to be used as video memory.
 
Mar 7, 2017
2,396
5,056
510
Thanks for responding :) I don't read a rebuttal of the theory I set forward, though; what do you say about that? What makes you conclude that the deduction I posted is completely illogical? Your primary point of disagreement seems to be the die sizes, but what else?

The difference is 3 mm². If you truly feel it's an independent product, can you provide evidence for this? Because I have a hard time believing that three independent sites have it completely wrong (and that it must be a completely unique product).


But 32 MB of eSRAM is practically nothing for texture reads/writes. Like the eDRAM on Xenos, it is used as a separate buffer to enable cheap anti-aliasing and rapid fetches. But you need the 8 GB of DDR3 as well.

If it purely used eSRAM as GPU memory, you would have a GPU with the same memory size as an Nvidia TNT2 from 1999.

For this APU, you can set in the BIOS how much UMA memory it gets. By default it uses 512 MB, but it can be set to 2 GB of DDR3. This is a common APU parameter; the Ryzen G series also allows up to a 2 GB UMA framebuffer to be used as video memory.

That there's a difference in die size at all is all the evidence you need to see that these are independent products. If that isn't obvious to you, you're clearly not informed enough on the subject matter at hand.

Silicon microprocessors are designed at the nanometre scale, and every die cut from a wafer coming off the production line at the fab will be identical in size. For the XB1 APU that is 363 mm². This thing being different even by a single square millimetre proves conclusively that it's a different chip, based on a different design.

Again, on the removal of eSRAM, you're failing to understand the fine-grained intricacies of the design of on-die comms, control, and I/O circuitry. It's not a software issue but a hardware design one. You cannot just take an APU chip whose GPU is architected around writing its framebuffer to an eSRAM scratchpad (designed specifically to accommodate the precise bandwidth and latency requirements), disable/fuse off the eSRAM, and hope for any reasonable GPU performance.

The fact that this is not an XB1 but a new product, intended to run games with a deliberately sized GPU and CPU aimed at a specific target performance level, means it would have to be designed from the ground up to work exclusively with off-die DDR3 instead of the DDR3/eSRAM combo the XB1 APU launched with. For example, the added latency of all memory reads after losing the on-die scratchpad would necessitate a review of the GPU cache design; it would almost certainly require more cache to avoid the significantly increased penalty of cache misses completely wrecking performance.
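The cache argument can be made concrete with the standard average-memory-access-time formula, AMAT = hit time + miss rate × miss penalty. All the numbers below are purely illustrative assumptions, not measured figures for either chip:

```python
def amat_ns(hit_ns, miss_rate, miss_penalty_ns):
    """Average memory access time: hit time plus expected miss cost."""
    return hit_ns + miss_rate * miss_penalty_ns

with_scratchpad = amat_ns(1.0, 0.10, 20.0)   # misses absorbed by fast eSRAM: 3.0 ns
ddr3_only = amat_ns(1.0, 0.10, 120.0)        # misses go all the way to DDR3: 13.0 ns
bigger_cache = amat_ns(1.0, 0.05, 120.0)     # halving the miss rate claws back: 7.0 ns
```

The toy numbers show the shape of the problem: a much larger miss penalty multiplies straight into average access time, and only a lower miss rate (i.e. more cache) can compensate.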

As for your other point, about serial numbers suggesting a date of manufacture for this chip: it's circumstantial at best. AMD made a lot of products in 2013; does that mean they're all XB1 APUs too? Your reasoning doesn't follow sound logic.

Also, I'm pretty baffled by your sheer refusal to acknowledge the possibility that three independent tech media outlets can all be wrong, when they so often are, especially given their obvious lack of domain knowledge, as evidenced by their naive claim that a chip with a totally different memory subsystem design and a clearly different die size is the same as another, simply because the CPU and GPU core counts align. It's misinformed at best, and at worst just plain ignorant.
 

Redneckerz

Those long posts don't cover that red neck boy
Jun 25, 2018
3,939
3,647
725
Stuck in 1Q84.
That there's a difference in die size at all is all the evidence you need to see that these are independent products. If that isn't obvious to you, you're clearly not informed enough on the subject matter at hand.
They may well be rebinned/obsolete/whatever-else products that look very similar to an original Xbox One APU. I am not going to be daft and ignore that there is a difference. And to be frank, prior to any of the mentioned sources, this thing was, for lack of a better word, confusing as all hell. It can clock itself to Xbox One X CPU speeds. Its GPU was listed by Chuwi as an R7 350, even though it clearly is not an R7 350. Its die looking incredibly similar, but not one-for-one the same, is confusing (and raised the suggestion that it was, perhaps, a revision). Heck, early on it was even suggested that there was a separate, discrete GPU outside the APU under a different cooler.

From the random quality of the drivers to the completely random quality of the motherboards that are sold by the dozen, this thing has a crapton about it that just confuses the heck out of you. Hell, even Anandtech came up with a memory speed for the DDR3 modules and GPU that made little sense (dual-channel, when the entire APU seems designed for quad-channel), and TCC had incredible difficulty getting a memory configuration that would actually work. In short, this entire APU makes no sense.

Having said that, I don't see why you have to tell me that I am not informed enough in such an antagonizing way. You again state that the die size alone is enough of a difference to cast doubt on whether or not it's an actual Xbox One APU. Given the confusing nature of the chip itself, that's understandable. But do you have any source that provides additional evidence for your doubt, that it is a fully independent product? That's what I am after. I am not denying that the die sizes differ, mind you.

I fully buy into the theory that the XBO APU was/is used as a blueprint for a very similar-looking APU, but I'd love to read verification of that. One aspect that supports your doubt is the fact that it uses an absolutely ancient process node: 28 nm. Then again, AMD has been known to release new processors on super-old nodes, of which the A9-9820 may very well be one.

Again, on the removal of eSRAM, you're failing to understand the fine-grained intricacies of the design of on-die comms, control, and I/O circuitry. It's not a software issue but a hardware design one. You cannot just take an APU chip whose GPU is architected around writing its framebuffer to an eSRAM scratchpad (designed specifically to accommodate the precise bandwidth and latency requirements), disable/fuse off the eSRAM, and hope for any reasonable GPU performance.
I absolutely don't believe it's just the flick of a switch, obviously. But rationally, for it to use eSRAM you would either need Windows driver support for it, or to treat it the same way Intel did with Crystal Well: as a separate cache recognized at the hardware level. The latter may very well be true, but nothing to date supports that.

As for your other point, about serial numbers suggesting a date of manufacture for this chip: it's circumstantial at best. AMD made a lot of products in 2013; does that mean they're all XB1 APUs too? Your reasoning doesn't follow sound logic.
True, but how many products AMD made in 2013 were octa-core Jaguar parts? The best one could come up with is either an XBO APU or a completely unknown part based on two Athlon 53xx modules. I don't consider the latter a realistic possibility (crafting a completely new processor, unknown to the public for several years, only to appear in an obscure machine made for Asian customers?). Logical deduction would rather suggest the former: that it's an Xbox One APU or some offshoot of it (Anandtech calls it a "Durango+" for a reason).

What makes your reasoning more logical, then?

Also, I'm pretty baffled by your sheer refusal to acknowledge the possibility that three independent tech media outlets can all be wrong, when they so often are, especially given their obvious lack of domain knowledge, as evidenced by their naive claim that a chip with a totally different memory subsystem design and a clearly different die size is the same as another, simply because the CPU and GPU core counts align. It's misinformed at best, and at worst just plain ignorant.
See the above. I think they can all be wrong, given the confusing nature of the chip itself. However, they have provided quite a lot of evidence to back up what they write. By comparison, you disagree in full while citing a single point of argument: the die size.

Which, okay, is different, and could very well be the reason it's an offshoot. But are there outlets stating the same thing? Are there outlets taking screencaps/die shots/teardowns to reveal the inner details? That's what I'm after.
 

Blond

Member
Dec 10, 2019
2,685
4,550
600
Sactown
Yes, and we already see the PS5 being on par with the XSX, and even pulling ahead in some titles due to some aspects of the PS5's GPU being 22% faster.
Again, the Dirt update shows this victory lap is a joke. Developers made it clear that Sony was providing kits up to 2 years in advance of Microsoft, so it only makes sense that the PS5 has better performance *right now*.

As APIs get updated and engineers get used to them, you should REALLY expect the tables to turn. Saying this as a PS5 owner: Sony bros are in for a rude awakening in the coming months.
 

Md Ray

Member
Nov 12, 2016
3,112
10,140
735
India
Again, the Dirt update shows this victory lap is a joke. Developers made it clear that Sony was providing kits up to 2 years in advance of Microsoft, so it only makes sense that the PS5 has better performance *right now*.

As APIs get updated and engineers get used to them, you should REALLY expect the tables to turn. Saying this as a PS5 owner: Sony bros are in for a rude awakening in the coming months.
I can see compute-heavy games performing better on XSX and fillrate-bound games performing better on PS5.

I've heard this a lot from you guys... "2 TF is a PS4's worth of difference", "expect a bigger difference between PS5 and XSX, even larger than X1X vs Pro, come November". November has come and gone and now it's "wait for screwdrivers".

TF isn't the be-all and end-all of GPU performance, and devs said before launch that both these consoles are very close in perf, which is proving to be right. The PS5 GPU has 22% faster triangle rasterization throughput, among other benefits, and it shows.
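That 22% figure falls straight out of the two GPUs' clock speeds: fixed-function triangle throughput scales with clock if the rasterizer counts match, which is the commonly reported assumption for these two chips:

```python
# PS5 GPU clock (up to) vs Series X GPU clock, in GHz
ps5_ghz, xsx_ghz = 2.23, 1.825

# assuming equal rasterizer widths on both front-ends,
# peak triangle throughput scales with clock alone
advantage_pct = (ps5_ghz / xsx_ghz - 1) * 100  # ~22%
```

The same clock ratio also applies to other fixed-function stages (pixel fill per ROP, etc.), which is why the "fillrate-bound games favor PS5" framing keeps coming up.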
 