
Poll: RTX 3000, Big Navi, PlayStation 5, or Xbox Series X?

Which next-generation platform are you buying?

  • Ampere

    Votes: 84 16.7%
  • PlayStation 5

    Votes: 179 35.5%
  • Xbox Series X | S

    Votes: 24 4.8%
  • I'm waiting for AMD to reveal Big Navi

    Votes: 27 5.4%
  • Ampere & PlayStation 5

    Votes: 121 24.0%
  • Ampere & Xbox Series X | S

    Votes: 19 3.8%
  • Ampere & Waiting for Big Navi

    Votes: 4 0.8%
  • Playstation 5 & Xbox Series X | S

    Votes: 24 4.8%
  • Playstation 5 & Waiting for Big Navi

    Votes: 16 3.2%
  • Xbox Series X | S & Waiting for Big Navi

    Votes: 6 1.2%

  • Total voters
    504

Black_Stride

do not tempt fate do not constrain Wonder Woman's thighs do not do not
Yeah, basically it's gonna take a bit for the games to appear to begin with, and the next iterations will have better compression ratios. Might as well wait really.

If RTX I/O compression ratios are gonna get better, it's going to get better for everyone on RTX 20 and RTX 30... hell, if DirectStorage gets better, AMD GPUs will also see gains... this is just software.

Just like DLSS went from 1.0 -> 1.9 -> 2.0 and got better and better with each iteration... you didn't need to buy a new GPU for the software to work.
What you are saying is exactly the same as someone saying there was no point having bought an RTX 20 because it could only use DLSS 1.0 at launch... but the same RTX 20 can use DLSS 2.0 right now.
So you want to wait till Hopper GPUs show up before you take advantage of software that's available right now and will improve on your current hardware?

By that logic, couldn't someone also say there's no point buying a Year 1 PS5 because the software will be better in Year 7, might as well wait?
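The claim here — same hardware, better software, more effective throughput — can be sketched with a few lines of arithmetic. All numbers below are hypothetical, purely for illustration:

```python
# Effective streaming throughput on fixed hardware as the
# decompression software improves (numbers are hypothetical).
def effective_throughput(raw_gbps, ratio):
    # Data delivered to the game per second =
    # raw SSD read speed * compression ratio
    return raw_gbps * ratio

raw = 7.0  # GB/s, roughly a fast PCIe 4.0 NVMe drive
print(effective_throughput(raw, 2.0))  # 2:1 today  -> 14.0 GB/s
print(effective_throughput(raw, 3.0))  # 3:1 later  -> 21.0 GB/s, same drive, same GPU
```

Nothing about the drive or the GPU changed between the two lines; only the ratio the software achieves did.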
 

Sethbacca

Member
If RTX I/O compression ratios are gonna get better, it's going to get better for everyone on RTX 20 and RTX 30... hell, if DirectStorage gets better, AMD GPUs will also see gains... this is just software.

Just like DLSS went from 1.0 -> 1.9 -> 2.0 and got better and better with each iteration... you didn't need to buy a new GPU for the software to work.
What you are saying is exactly the same as someone saying there was no point having bought an RTX 20 because it could only use DLSS 1.0 at launch... but the same RTX 20 can use DLSS 2.0 right now.
So you want to wait till Hopper GPUs show up before you take advantage of software that's available right now and will improve on your current hardware?

By that logic, couldn't someone also say there's no point buying a Year 1 PS5 because the software will be better in Year 7, might as well wait?

Except that with the PS5 the hardware will stay the same; with graphics cards the hardware literally changes, which affects the speed at which they can actually handle the effects once they start being implemented.

A PS5 will handle all games that come out for it equally over the lifetime of the generation, unlike a graphics card. All I'm saying is it isn't worth buying into it early, at least from my perspective. For you it may be different; your money and all.
 

Black_Stride

do not tempt fate do not constrain Wonder Woman's thighs do not do not
Except that with the PS5 the hardware will stay the same; with graphics cards the hardware literally changes, which affects the speed at which they can actually handle the effects once they start being implemented.

A PS5 will handle all games that come out for it equally over the lifetime of the generation, unlike a graphics card. All I'm saying is it isn't worth buying into it early, at least from my perspective. For you it may be different; your money and all.

RTX IO, DirectStorage and DLSS are software solutions?
The whole point of having these APIs in place is that with your current hardware you will benefit as the software gets better.
If the compression ratio is 2:1 on RTX 20, it's 2:1 on RTX 30... if in 2022 DirectStorage and new algorithms lead to 3:1 compression ratios, then your RTX 20/30 will now be able to achieve 3:1 compression ratios.
That's why I used the example of the PS5: even if its hardware stays the same, the software gets better... the exact same thing is true with these software solutions; as the software gets better they perform better on the same hardware... so waiting for a new hardware generation to use the exact same software isn't exactly sound.

Waiting for Hopper to use these current technologies doesn't make much sense, else you will be waiting forever.
 

Sethbacca

Member
RTX IO, DirectStorage and DLSS are software solutions?
The whole point of having these APIs in place is that with your current hardware you will benefit as the software gets better.
If the compression ratio is 2:1 on RTX 20, it's 2:1 on RTX 30... if in 2022 DirectStorage and new algorithms lead to 3:1 compression ratios, then your RTX 20/30 will now be able to achieve 3:1 compression ratios.
That's why I used the example of the PS5: even if its hardware stays the same, the software gets better... the exact same thing is true with these software solutions; as the software gets better they perform better on the same hardware... so waiting for a new hardware generation to use the exact same software isn't exactly sound.

Waiting for Hopper to use these current technologies doesn't make much sense, else you will be waiting forever.

It's entirely possible I'm misunderstanding the tech but I was under the impression that the card had a hardware decompression block. I wasn't aware it was a purely software solution. I haven't spent a whole lot of time reading up on it as I have no intention of upgrading anytime soon. It was really more of a "that's neat, let's see where it goes" kind of overview on my part. So yeah I guess in that scenario then you would be right.
 
Last edited:

Warnen

Don't pass gaas, it is your Destiny!
Most likely going to build a desktop. Got all the parts in my carts online. Waiting to see if I can get a preorder on a 3080.
 

Airbus Jr

Banned
It's between Nvidia and AMD for me this year

Xbox and PlayStation are dead, meaningless and serve no purpose to me now since they brought their games to PC
 
Last edited:

BluRayHiDef

Banned
It's between Nvidia and AMD for me this year

Xbox and PlayStation are dead, meaningless and serve no purpose to me now since they brought their games to PC
This is a silly mindset; only old PlayStation games are being ported to PC. Hence, if you want to play PlayStation games in a reasonable time after their release, you should get a PlayStation.
 
Last edited:

Airbus Jr

Banned
This is a silly mindset; only old PlayStation games are being ported to PC. Hence, if you want to play PlayStation games in a reasonable time after their release, you should get a PlayStation.

And I'm in no hurry to rush to buy a new game for PS5

As long as I know the game is coming down the line to PC someday, that's good enough for me

My game catalogue is big enough to keep me busy while I wait (besides, job and family will keep me occupied)
 
Last edited:

Vawn

Banned
PS5 day one. Xbox likely a year or two later, assuming there are a few exclusives I want by that time.
 

BluRayHiDef

Banned
And I'm in no hurry to rush to buy a new game for PS5

As long as I know the game is coming down the line to PC someday, that's good enough for me

My game catalogue is big enough to keep me busy while I wait (besides, job and family will keep me occupied)

Yes, but there is no guarantee that every PlayStation game is coming to PC. Have any of Naughty Dog's games been announced for PC? Sony Santa Monica Studio's? Bend Studio's?
 
Last edited:

Ascend

Member
Anyone who is not willing to wait for big navi is either only interested in consoles, or they are making a mistake. I'll give only one hint.

Samsung 8nm vs TSMC 7nm.
 

Airbus Jr

Banned
This is a silly mindset; only old PlayStation games are being ported to PC. Hence, if you want to play PlayStation games in a reasonable time after their release, you should get a PlayStation.

Yes, but there is no guarantee that every PlayStation game is coming to PC. Have any of Naughty Dog's games been announced for PC? Sony Santa Monica Studio's? Bend Studio's?

Whatever it is, after the PS games coming to PC news a couple of days ago I already have my mind set

It's PC (Nvidia or AMD) + the next Nintendo system for me
 
Last edited:

llien

Member
Anyone who is not willing to wait for big navi is either only interested in consoles, or they are making a mistake. I'll give only one hint.

Samsung 8nm vs TSMC 7nm.
Not waiting to see the entire set of offerings is indeed not wise.

For 8nm (which was said to be "just rebranded 10nm") vs 7nm, note the following:

7nm DUV TSMC
250mm² 5700XT => 10.3 billion transistors => 41.2 million transistors per mm²

8nm Samsung
627mm² 3080 => 28 billion transistors => 44.6 million transistors per mm²

So 8nm Samsung actually beats TSMC 7nm DUV at transistor density (of course "how chips are designed" reservations apply).
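The density figures in this post are easy to re-derive from the quoted die sizes and transistor counts; a quick sketch (using only the numbers given above):

```python
# Re-derive the transistor densities quoted above.
def mtr_per_mm2(transistors_billions, die_area_mm2):
    # billions -> millions is a factor of 1000
    return transistors_billions * 1000 / die_area_mm2

navi10 = mtr_per_mm2(10.3, 250)  # 5700 XT, TSMC 7nm DUV
ga102 = mtr_per_mm2(28.0, 627)   # RTX 3080 die, Samsung 8nm
print(round(navi10, 1))  # 41.2
print(round(ga102, 1))   # 44.7 (the post's 44.6 is just truncation)
```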
 

Clear

CliffyB's Cock Holster
$700 on a GPU just seems like a needless extravagance to me, no matter how good it is.
 

Papacheeks

Banned
It's entirely possible I'm misunderstanding the tech but I was under the impression that the card had a hardware decompression block. I wasn't aware it was a purely software solution. I haven't spent a whole lot of time reading up on it as I have no intention of upgrading anytime soon. It was really more of a "that's neat, let's see where it goes" kind of overview on my part. So yeah I guess in that scenario then you would be right.

It dedicates some of the cores for decompression if I read that slide right. I didn't see anything talking about a second dedicated chip on the board that just handles this?

Unless I misread.
 

Papacheeks

Banned
$700 on a GPU just seems like a needless extravagance to me, no matter how good it is.

It's more or less for people who want to future-proof. So if you game at 1440p @ 144Hz, a 3070 is probably what you should buy. If you want to hook it up to your TV using HDMI 2.1 and get all the amazing HDR goodness, then a 3080 is probably the better bet.

But basically these cards will last you a lot longer than most cards. They are similar to the 1000 series in how long you were able to use them for games.
 

Mr. Nobody

Member
RTX 3080. The price to performance that I saw yesterday was what I needed to hear. That and all the games that I want to play will be available for cheaper on PC either day one or eventually. All that plus I'm favoring KB&M over controllers.
 

Papacheeks

Banned
There are people, like myself, that never want to see our FPS drop below 60......EVER.

Yup. When I bought my Radeon VII a year ago, I figured if I'm going from 1080p 144Hz to 1440p 165Hz, then I need something that will keep games over 60. Was a good decision. And my next card will be in the $600-700 range as well.

Reading the comments, GAF is full of ballers

Or people with income to pay for all these nice toys. I mean, I'm cautious with my money because I'm also trying to buy a house soon. But I'm super cheap these days when it comes to things like groceries, clothes, things like that.

All that money saved goes into my motorcycle/video games/house savings.

But also know that people buying $1,400 or $700 GPUs are going to use them for 4-5 years if they can. When you spread it over the course of 3-4 years, it's actually not a lot.
 
Last edited:

Mr. Nobody

Member
Anyone who is not willing to wait for big navi is either only interested in consoles, or they are making a mistake. I'll give only one hint.

Samsung 8nm vs TSMC 7nm.

That is assuming two things...

1. The process used to make the chips will make a huge difference.

2. That AMD will pull a "Ryzen" on NVidia. Unlike Intel, NVidia hasn't slowed down on their R&D recently.

I hope that I am wrong on both. The GPU market really needs some competition.
 

Mr. Nobody

Member
Yup. When I bought my Radeon VII a year ago, I figured if I'm going from 1080p 144Hz to 1440p 165Hz, then I need something that will keep games over 60. Was a good decision. And my next card will be in the $600-700 range as well.

Radeon VII? That must have been a tough find.
 

Papacheeks

Banned
That is assuming two things...

1. The process used to make the chips will make a huge difference.

2. That AMD will pull a "Ryzen" on NVidia. Unlike Intel, NVidia hasn't slowed down on their R&D recently.

I hope that I am wrong on both. The GPU market really needs some competition.

I'll let you in on something: RDNA 2 is a starting path to the real game changer for GPUs, which is RDNA 3-4, which will use a chiplet design similar to Ryzen. AMD will have this before NVIDIA. Why do you think this feels rushed?

Moore's Law is Dead talks about this: final drivers are not ready yet, and them getting these out so soon with very little stock shows they want to be a talking point for the next couple of months. Once next gen starts in November, going into 2021 we are going to get leaks on RDNA 3, and possibly the chiplet talk will start.

Chiplets are going to revolutionize the GPU game. RTX 3000 cards are powerful, but they are just better, more power-hungry versions of Turing. Imagine a board with a GPU that has a chiplet design: more cores, but because of how they talk and are subdivided, better power efficiency and possibly less heat, since they can be sub-sectioned and designated to do different things. Not all of them have to be rasterizing or doing ray tracing; some could be in the back doing decompression, or talking to a dedicated decompression chip that down the road could also talk to NVMe SSD storage on the video board itself.

In the next year or so shit is going to be wild as fuck.

Radeon VII? That must of been a tough find.

I preordered it at launch lol.

Up till the last 6 months they were easy to find.
 
Last edited:

Ascend

Member
Not waiting to see the entire set of offerings is indeed not wise.

For 8nm (which was said to be "just rebranded 10nm") vs 7nm, note the following:

7nm DUV TSMC
250mm² 5700XT => 10.3 billion transistors => 41.2 million transistors per mm²

8nm Samsung
627mm² 3080 => 28 billion transistors => 44.6 million transistors per mm²

So 8nm Samsung actually beats TSMC 7nm DUV at transistor density (of course "how chips are designed" reservations apply).
Well...

TSMC 7nm DUV has a transistor density of 96.5 MTr/mm2. Source
Samsung's 8nm has a transistor density of 61.18 MTr/mm2. Source

For some reason, AMD went really lenient on their Navi 10 architecture. Considering the die size of the Xbox Series X though, they must have increased their transistor density for RDNA2.
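Putting these process figures next to the achieved densities from the earlier die-size post shows just how "lenient" Navi 10 was. A quick check, using only the numbers quoted in this thread:

```python
# Achieved density vs. nominal process maximum, figures as cited
# in this thread (achieved: earlier die-size post; maxima: above).
chips = {
    "Navi 10 / TSMC 7nm": (41.2, 96.5),
    "GA102 / Samsung 8nm": (44.7, 61.18),
}
for name, (achieved, process_max) in chips.items():
    pct = 100 * achieved / process_max
    print(f"{name}: {pct:.0f}% of nominal max density")
# Navi 10 sits at roughly 43% of what the node allows; GA102 at ~73%.
```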
 
Last edited:

Papacheeks

Banned
Well...

TSMC 7nm DUV has a transistor density of 96.5 MTr/mm2. Source
Samsung's 8nm has a transistor density of 61.18 MTr/mm2. Source

For some reason, AMD went really lenient on their Navi 10 architecture. Considering the die size of the Xbox Series X though, they must have increased their transistor density.

RDNA 1, from what I know from insiders, was one of those decisions to get a new product out for the last half of 2019. Hence why it had no ray tracing and was power hungry even on 7nm. Also explains the rushed drivers, which took them a long time to correct.

There's a reason they have been radio silent on RDNA 2. Part of that is because the consoles are using RDNA 2 chips, so they can't come out and do a giant slide/deep dive into the architecture like Nvidia did yesterday, because it would not take long for internet nerds like myself to dissect all of the info and theorize pretty accurately what the PS5/Xbox Series X were capable of compared to the retail GPU counterpart.

If you look at it this way, RDNA 1 was a test beta. I mean, in pure performance the 5700 XT stacks up well against a 2070-2070 Super, and for about $100 less. Imagine an even better 7nm process, more time for drivers, and feedback from developers/Xbox/Sony on the API with DirectStorage and all that?

Means these things will be competitive and work well. The only question is what product they are trying to compete with at what price point. And I would argue that if you can get an RDNA 2 card for even cheaper than $500 that has more memory and beats a 3070, the next wave of RDNA 3 is going to blow people away, and you may want to go mid-range instead of Big Navi.
 

888

Member
I’m just gonna stick with getting a few 3080s and chill on the consoles for a while until I feel a need to get one.
 

Ascend

Member
RDNA 1, from what I know from insiders, was one of those decisions to get a new product out for the last half of 2019. Hence why it had no ray tracing and was power hungry even on 7nm. Also explains the rushed drivers, which took them a long time to correct.

There's a reason they have been radio silent on RDNA 2. Part of that is because the consoles are using RDNA 2 chips, so they can't come out and do a giant slide/deep dive into the architecture like Nvidia did yesterday, because it would not take long for internet nerds like myself to dissect all of the info and theorize pretty accurately what the PS5/Xbox Series X were capable of compared to the retail GPU counterpart.

If you look at it this way, RDNA 1 was a test beta. I mean, in pure performance the 5700 XT stacks up well against a 2070-2070 Super, and for about $100 less. Imagine an even better 7nm process, more time for drivers, and feedback from developers/Xbox/Sony on the API with DirectStorage and all that?

Means these things will be competitive and work well. The only question is what product they are trying to compete with at what price point. And I would argue that if you can get an RDNA 2 card for even cheaper than $500 that has more memory and beats a 3070, the next wave of RDNA 3 is going to blow people away, and you may want to go mid-range instead of Big Navi.
It will be interesting to see, because Samsung's 8nm (which is actually an optimized 10nm) is a lot cheaper than TSMC's 7nm, although they also have worse yields. But according to Moore's Law is Dead, nVidia managed to get a deal with Samsung to only pay for working dies, so they don't have to worry about yields. So even if die sizes are smaller for RDNA2, they might still not be able to charge less for them compared to nVidia, simply because nVidia's deal with Samsung is better than AMD's deal with TSMC.

I don't doubt that RDNA2 can reach the performance of at least the RTX 3080, and likely do it with less power consumption. The question is if that card could be profitable for them if they undercut nVidia.
 

llien

Member
If you look at it this way RDNA 1 was a test beta, I mean in pure performance the 5700xt stacks well against a 2070-2070 super. ANd for at a $-100 less. Imagine a even better 7nm process, more time for drivers, and feedback from developers/Xbox/Sony on API with dIRECT STORAGE and all that?
Well, all that is a cool theory, but we have even an AMD-friendly leaker referencing just the 3070 (i.e. GA104 cannot beat Big Navi).
It means product stacking like this:

3070 < < < 6800 < 3080

Also note that the 3080 is a harvested 627mm², whereas "big navi" is expected to be 505mm² or 485mm², quite a bit smaller.

TSMC 7nm DUV has a transistor density of 96.5 MTr/mm2. Source
Samsung's 8nm has a transistor density of 61.18 MTr/mm2. Source

For some reason, AMD went really lenient on their Navi 10 architecture. Considering the die size of the Xbox Series X though, they must have increased their transistor density for RDNA2.
Not going with too dense a design normally allows for higher clocks. Just my personal impression from the GCN era (the number of CUs and transistors in the 480 was on par with the 1080).
Could you estimate the figures for the XSeX RDNA2 chip?
 
PS5 + PC. Likely won’t upgrade until 2021, but if the right sale happens, I have $2g sitting in the coffers for when opportunity knocks.
 

Papacheeks

Banned
It will be interesting to see, because Samsung's 8nm (which is actually an optimized 10nm) is a lot cheaper than TSMC's 7nm, although they also have worse yields. But according to Moore's Law is Dead, nVidia managed to get a deal with Samsung to only pay for working dies, so they don't have to worry about yields. So even if die sizes are smaller for RDNA2, they might still not be able to charge less for them compared to nVidia, simply because nVidia's deal with Samsung is better than AMD's deal with TSMC.

I don't doubt that RDNA2 can reach the performance of at least the RTX 3080, and likely do it with less power consumption. The question is if that card could be profitable for them if they undercut nVidia.

Here's the second half of that coin: AMD has gone directly to TSMC for all of their 7nm fabrication, with laptops, desktop CPUs, server CPUs, and now GPUs. They bought tons of capacity to be taped out, which is why Nvidia kind of got fucked when it came to going with an efficient node.

AMD has leverage right now because of the amount of volume they are doing. The more volume you do, the better price overall you get for chips made on the same node process, and almost everything for AMD is currently on 7nm or 7nm+.

TSMC wanted a good amount of money because they would have to make more room in their fabs somehow for Nvidia, and since Nvidia would not be making as much as AMD's combined chips, it would cost more for Nvidia to be on that node.

Nvidia wanted a better price and wasn't happy with the capacity that was available. So they went Samsung.

This gen is going to be wild. I've never been this excited to talk about tech since 2007, when DirectX 10 cards came out and multicore CPUs from AMD and Intel were starting to hit.
 

iHaunter

Member
My 2080Ti will last me for a bit still, I'll wait for next-gen to drop. Maybe the next GPUs can give me handys under my desk.
 

Papacheeks

Banned
Well, all that is a cool theory, but we have even an AMD-friendly leaker referencing just the 3070 (i.e. GA104 cannot beat Big Navi).
It means product stacking like this:

3070 < < < 6800 < 3080

Also note that the 3080 is a harvested 627mm², whereas "big navi" is expected to be 505mm² or 485mm², quite a bit smaller.


Not going with too dense a design normally allows for higher clocks. Just my personal impression from the GCN era (the number of CUs and transistors in the 480 was on par with the 1080).
Could you estimate the figures for the XSeX RDNA2 chip?

Right, and I think stack-wise they will compete in that section. Where things get interesting is the rumors of memory configs being 12GB-16GB. And if the efficiency is that good and base clocks show really good power/efficiency at, let's say, 1750-1800MHz, which is what my Radeon VII is at, I wonder if you could push that to 1900+MHz if you don't care about efficiency, which a lot of people have done on the Radeon VII. That to me would give you a lot more performance on the cheap, as long as you have adequate cooling.
 
Last edited:

MH3M3D

Member
I'm waiting to see what AMD comes up with. They put 12 TF on a CPU+GPU (APU). I'm excited to see what they can do with a dedicated GPU with more headroom for power draw and cooling. If the ray tracing performance is good and they can come up with a DLSS-like feature at a good price, then I'll definitely consider an AMD card. Equally important for me is having a whisper-quiet card. I have little faith in 300+ Watt cards being quiet and having a cooling solution that doesn't drive up the price.
 

Papacheeks

Banned
I'm waiting to see what AMD comes up with. They put 12 TF on a CPU+GPU (APU). I'm excited to see what they can do with a dedicated GPU with more headroom for power draw and cooling. If the ray tracing performance is good and they can come up with a DLSS-like feature at a good price, then I'll definitely consider an AMD card. Equally important for me is having a whisper-quiet card. I have little faith in 300+ Watt cards being quiet and having a cooling solution that doesn't drive up the price.

Too much emphasis on DLSS. People need to understand that engine optimization is going to help even more than something that is proprietary to Nvidia. In some games like Death Stranding it helps make a better 4K image, but I would argue that engine, at that time, was designed for a subset of hardware that was 7 years old. Unreal 4 games seem to scale super well without DLSS. I want to see more games being natively rendered.

DLSS is something that was designed to give better performance on Turing because of the giant hit ray tracing took. Newer engines designed around new hardware, like Unreal 5, are going to show that DLSS isn't the be-all.
 

Entroyp

Member
Playstation 5 and Ampere whenever Zen 3 chips are released. (I’m building my first gaming PC since 2004)
 