
Ratchet & Clank: Rift Apart will run at dynamic 4K resolution, targets 60 FPS for performance mode

It's funny reading this thread full of people saying that 4K/60fps/ray tracing is completely impossible, while the very next thread down is about Gran Turismo 7 trying for exactly that.

Personally, I think it's going to be very rare, especially in the bigger AAA next-gen-only titles.

Obviously things will be different for cross-gen and smaller titles.
 
With the complexity of rendering these days, it's the GPU. We can disagree, but unless a game is intentionally CPU-limited, we'll see more GPU-limited games in the future. Let's agree that you need a pretty good CPU to feed data to the GPU at a good clip, but as scene complexity and rendering complexity increase, the GPU becomes the bottleneck -- NOT the CPU.


The GPU is number one, of course, but the CPU is still substantial if you want those high minimum frame rates and a smooth experience. The folks at Digital Foundry managed to isolate particular instances in games where you're CPU-bottlenecked even at 1440p.

These days, after I managed to get myself a 3080, I switched the CPU and the RAM as well. An Intel i5-8400 is a pretty good CPU: six cores, boosting to 4 GHz. Still, there was a lot of hitching. Swapping it for an i9-9900K was a major improvement. Even the RAM mattered: I was using 16 GB at 2666, and going to 16 GB at 3600 smoothed spikes even further.
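The RAM part is easy to sanity-check with napkin math. A minimal sketch, assuming dual-channel DDR4 and decimal GB/s:

```python
# Peak theoretical DDR4 bandwidth: transfers/s x 8 bytes per transfer
# per channel x number of channels. Dual-channel is assumed here.
def ddr4_peak_gbs(mt_per_s: int, channels: int = 2) -> float:
    return mt_per_s * 1e6 * 8 * channels / 1e9  # decimal GB/s

print(ddr4_peak_gbs(2666))  # ~42.7 GB/s
print(ddr4_peak_gbs(3600))  # ~57.6 GB/s, roughly a 35% uplift
```

A ~35% jump in peak bandwidth won't raise average framerates much, but it plausibly helps exactly the worst-case spikes described above.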
 

Neo_game

Member
The same people on here spouting that resolution doesn't matter were probably beating their dicks and slinging feces when PS4 games were in 1080p and Xbox One was 900p.

I wonder why the change of tone?

The 720p gen to the 1080p gen was roughly a 2x jump. 1080p to 4K is 4x the pixel count, which is pretty big and not a fair comparison, I think. 1080p is about 2 million pixels, 4K is about 8 million. 8K is even more ridiculous at 33 million pixels. TV manufacturers have gone crazy.
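The arithmetic is easy to verify (the 720p-to-1080p jump is actually 2.25x). A quick check:

```python
# Pixel counts per resolution and the jump between generations.
resolutions = {
    "720p":  (1280, 720),
    "1080p": (1920, 1080),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
}
pixels = {name: w * h for name, (w, h) in resolutions.items()}
for name, count in pixels.items():
    print(f"{name}: {count:,} pixels")

print(pixels["1080p"] / pixels["720p"])  # 2.25x (not quite 2x)
print(pixels["4K"] / pixels["1080p"])    # 4.0x
print(pixels["8K"] / pixels["4K"])       # 4.0x
```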
 

iHaunter

Member
I don't care about resolution, I care about FPS and asset quality, which is an option they have. I hope there's a collector's edition.
 

scydrex

Member
The only problem with Zen 2 is worse latencies, but they're far better here on these APUs.
And no matter if "there are better CPUs out there", what matters is that these new CPUs are much, much faster than the old Jaguars. Look at what current games have achieved with those weak CPUs, and think what can be done with new CPUs that are 6 to 8 times faster.

But I agree, they should have waited another year to get Zen 3 and a more mature RT solution on RDNA2+.

PS5 Pro and Xbox Series X2 (or XX, or X²) will maybe have RDNA2+. I don't want this gen anymore with its weak-ass Jaguar CPU. Seven years already... that's enough.
 
Last edited:

Reindeer

Member
The only problem with Zen 2 is worse latencies, but they're far better here on these APUs.
And no matter if "there are better CPUs out there", what matters is that these new CPUs are much, much faster than the old Jaguars. Look at what current games have achieved with those weak CPUs, and think what can be done with new CPUs that are 6 to 8 times faster.

But I agree, they should have waited another year to get Zen 3 and a more mature RT solution on RDNA2+.
I never said that Zen 2 isn't a huge upgrade over Jaguar; it obviously is, but it's not as great as some people think. I think the reason they didn't wait for Zen 3 and RDNA3 (2+?) is that it would mean making the APUs on 5nm, which costs twice as much per wafer ($18,000 vs $9,000). Financially this makes plenty of sense for products that target an affordable price, but from a technological point of view I'm personally somewhat disappointed. Still, we should have much better base consoles than we did last gen. Let's just hope the XSS is phased out after 4-5 years, as it will likely become a hindrance towards the latter half of the gen, if not earlier.
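For a rough sense of what that wafer-price jump means per chip, here's a sketch using the classic dies-per-wafer approximation; the die size is an illustrative assumption, and yield and any density gain from the smaller node are ignored:

```python
import math

# Classic dies-per-wafer approximation (ignores yield and scribe lines):
# dies ~ pi*(d/2)^2/A - pi*d/sqrt(2*A), with d = wafer diameter (mm) and
# A = die area (mm^2). The 300 mm^2 die area is an illustrative guess,
# not either console's actual figure.
def dies_per_wafer(die_area_mm2: float, wafer_diam_mm: float = 300.0) -> int:
    r = wafer_diam_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diam_mm / math.sqrt(2 * die_area_mm2))

n = dies_per_wafer(300.0)  # ~197 candidate dies per 300 mm wafer
for wafer_cost in (9_000, 18_000):  # the 7nm vs 5nm prices cited above
    print(f"${wafer_cost:,} wafer -> ~${wafer_cost / n:.0f} per die")
```

Under those assumptions the APU cost roughly doubles, which is the financial argument above; a fairer comparison would also credit 5nm's density gain with shrinking the die.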
 
Last edited by a moderator:

Mentat02

Banned
The PS5's custom I/O will take some load off the GPU, and the SSD will stream assets so games run at higher performance without sacrificing visual fidelity. This is sort of what Sony has been claiming, not me.
 

Reindeer

Member
The PS5's custom I/O will take some load off the GPU, and the SSD will stream assets so games run at higher performance without sacrificing visual fidelity.
Yeah, sure. We can see the evidence of that in Spider-Man Remastered's 60fps mode, which is visually downgraded in some areas compared to the original. The SSD upgrades sound all well and good, but I'll wait to see actual proof of this claim with my own eyes before I believe it.
 
Last edited by a moderator:

pyrocro

Member
I was not referring to PC parts or Big Navi with 80 CUs or whatever it turns out to be. Big Navi is leaked to have 10-CU shader arrays and a frequency focus like PS5; strange, that.

XSX is not Big Navi, LOL.
Sorry, my words didn't say that at all; I guess that's what you took from them. My bad.

It's all RDNA2: same behavioral characteristics, TF counted the same way.


If only we had something called maths, where an L1 feeding 10 CUs at 2.23 GHz will perform much better than an L1 cache feeding 14 CUs at 1.825 GHz. I guess the numbers 12 and 10 are the limit of people's math capability.
Wow, dude, the ad hominems. Pointing out that 12 is bigger than 10 is not a personal attack; take it easy.
But I see this is par for the course with you.

This is only a real issue on PC because of the vast number of cards out there, but the very nature of a console makes it a non-issue: the shader compiler, and plain counting on the developer's part, can easily account for it, because the hardware is immutable and gets optimized for directly, which is why shaders on consoles are pre-compiled.
If you can count to 2, you can count to 10 and 14.

There is also the fact that the whole GPU queue is fed off of available memory bandwidth, and the XSX GPU has more.
When the GPU is doing work, it is also constantly reading from memory (SIMD); any one scene does not fit in L1 and L2.
Having a faster fill rate and whatnot does not help much if you constantly have to go back to a slower pool of memory to complete a scene, which is why the increased bandwidth on the Nvidia 3000-series cards has such a great impact on performance (go check any review).

By your logic, the PS5 had better hope and pray nothing leaves L1 and L2 (which is silly, of course, because textures and vertex data).
It reminds me of the PS2's insane theoretical fill rate: it could not be fully exploited because everything before it was slow.

I'm not sure if you're trying to make a real argument or not.
If you are, you're intentionally leaving out a lot of factors.


Feeding the CUs is just as important as the number of them. It's only MS who gauges performance in TF; everyone else uses benchmarks and how games perform.
Having fast bandwidth is even more important, as this is where all the scene data resides. See how all of this is connected?

The whole industry uses TF. Stop acting as though Sony hasn't been the TF king since the PS2; remember Cell.


And TF has nothing to do with the method of ray tracing. Do you think PS5 is doing it the same as the others? Have you heard of local ray? If PS5 is using FP16 for ray tracing and other tricks from the 32 local-ray patents, then it could be doing half the RT work. What does that do for your performance maths?
Are they using FP16, or does TF not matter? (What do you think FP16 is? Jeez. How do you think they get the TF number? Hint: it's in the acronym.)
Which is it? Spare us the doublespeak, please. Do the FPs need to be calculated or not? (Rhetorical question.)
Which is why 12+ is more significant than ~10.
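For what it's worth, the TF numbers both sides keep citing come from one formula, so the arithmetic itself isn't really in dispute. A sketch using the consoles' published CU counts and clocks:

```python
# How FP32 TFLOPS is quoted for RDNA-family GPUs:
# CUs x 64 shaders per CU x 2 ops per clock (FMA) x clock in GHz.
def tflops_fp32(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(tflops_fp32(36, 2.23))   # PS5: ~10.28 TF (at its peak clock)
print(tflops_fp32(52, 1.825))  # XSX: ~12.15 TF
# RDNA runs packed FP16 at double rate, so FP16 TFLOPS is simply 2x these.
```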

So Microsoft authored and designed DXR years before we had the first implementation, and Nvidia and AMD spend billions developing products around DXR compatibility.
But it's none of these companies that has the advantage; it's Sony, with the least time and money invested in ray tracing, that has all the advantages. Golly gee, what a company.


Let's see, power divided by half, erm...

Have you noticed the number of PS5 ray-traced titles coming? Strange.

Wow, the logic and understanding you've got there is not so good. When XSX has a good-looking next-gen title with some ray tracing, come back and talk.
Back to this again:
12+ is more than ~10; get over it.
Also, pretending bandwidth is not important just exposes you.

If you're choosing to die on the "PS5 is more powerful" hill, I'm happy to accommodate you, as this will be easily proven in under two months.

But please, continue to stake your flag on that ant mound.
 

Mentat02

Banned
Yeah, sure. We can see the evidence of that in Spider-Man Remastered's 60fps mode, which is visually downgraded in some areas compared to the original. The SSD upgrades sound all well and good, but I'll wait to see actual proof of your claim with my own eyes.
I'm saying that's what Sony has been touting. Personally, I think it will very much benefit the newer open-world games, but at the end of the day the power of the GPU will still determine performance and visual fidelity.
 

Mister Wolf

Member
Yeah, sure. We can see the evidence of that in Spider-Man Remastered's 60fps mode, which is visually downgraded in some areas compared to the original. The SSD upgrades sound all well and good, but I'll wait to see actual proof of your claim with my own eyes.

Agreed. It's smoke and mirrors until proven otherwise.
 

Boglin

Member
The PS5's custom I/O will take some load off the GPU, and the SSD will stream assets so games run at higher performance without sacrificing visual fidelity. This is sort of what Sony has been claiming, not me.

The custom I/O takes load off the CPU for decompression, while streaming in assets can free up some reserved VRAM.
Nothing Sony has shown will help it make up for the difference between its GPU and the XSX's, and as far as I know, they haven't claimed otherwise.
 
The custom I/O takes load off the CPU for decompression, while streaming in assets can free up some reserved VRAM.
Nothing Sony has shown will help it make up for the difference between its GPU and the XSX's, and as far as I know, they haven't claimed otherwise.

Sony can't magically make up for having fewer compute units in the GPU. They can load scenes faster, but not render them at 4K/60fps. Whether gamers will care in the end, who knows. It may come down to marketing.
 

Boglin

Member
Sony can't magically make up for having fewer compute units in the GPU. They can load scenes faster, but not render them at 4K/60fps. Whether gamers will care in the end, who knows. It may come down to marketing.

I know, that's why I said Sony has shown nothing that can make up the difference.
 
I know, that's why I said Sony has shown nothing that can make up the difference.

People DO understand that Sony has some of the smartest graphics programmers IN THE WORLD. We CREATED the Cell. So while people laud MS for their HW skills, do you really think we don't know how to build a system optimized for maximizing graphics for programmers? Seriously? There is no way we're giving up a 30%+ advantage to MS.
 

geordiemp

Member
Sorry, my words didn't say that at all; I guess that's what you took from them. My bad.

It's all RDNA2: same behavioral characteristics, TF counted the same way.



Wow, dude, the ad hominems. Pointing out that 12 is bigger than 10 is not a personal attack; take it easy.
But I see this is par for the course with you.

This is only a real issue on PC because of the vast number of cards out there, but the very nature of a console makes it a non-issue: the shader compiler, and plain counting on the developer's part, can easily account for it, because the hardware is immutable and gets optimized for directly, which is why shaders on consoles are pre-compiled.
If you can count to 2, you can count to 10 and 14.

There is also the fact that the whole GPU queue is fed off of available memory bandwidth, and the XSX GPU has more.
When the GPU is doing work, it is also constantly reading from memory (SIMD); any one scene does not fit in L1 and L2.
Having a faster fill rate and whatnot does not help much if you constantly have to go back to a slower pool of memory to complete a scene, which is why the increased bandwidth on the Nvidia 3000-series cards has such a great impact on performance (go check any review).

By your logic, the PS5 had better hope and pray nothing leaves L1 and L2 (which is silly, of course, because textures and vertex data).
It reminds me of the PS2's insane theoretical fill rate: it could not be fully exploited because everything before it was slow.

I'm not sure if you're trying to make a real argument or not.
If you are, you're intentionally leaving out a lot of factors.



Having fast bandwidth is even more important, as this is where all the scene data resides. See how all of this is connected?

The whole industry uses TF. Stop acting as though Sony hasn't been the TF king since the PS2; remember Cell.



Are they using FP16, or does TF not matter? (What do you think FP16 is? Jeez. How do you think they get the TF number? Hint: it's in the acronym.)
Which is it? Spare us the doublespeak, please. Do the FPs need to be calculated or not? (Rhetorical question.)
Which is why 12+ is more significant than ~10.

So Microsoft authored and designed DXR years before we had the first implementation, and Nvidia and AMD spend billions developing products around DXR compatibility.
But it's none of these companies that has the advantage; it's Sony, with the least time and money invested in ray tracing, that has all the advantages. Golly gee, what a company.



Back to this again:
12+ is more than ~10; get over it.
Also, pretending bandwidth is not important just exposes you.

If you're choosing to die on the "PS5 is more powerful" hill, I'm happy to accommodate you, as this will be easily proven in under two months.

But please, continue to stake your flag on that ant mound.

You realise the bandwidth I am talking about is L2 to L1 and L1 to the L0s in the shaders -- bandwidth on silicon. You sticking with RAM bandwidth just shows your lack of understanding and exposes you.

The feeding of the CUs is done by caches: L2 to L1, then L1 to the L0s in the shader arrays.

PS5 feeding the CUs from caches will be MUCH faster. Both PS5 and XSX have 4 shader arrays, and both have 4 L1 caches. PS5's clock is 20% higher, and a shader array is 10 CUs on PS5 versus 14 CUs on XSX.

PC parts will have shader arrays of 10 CUs and GPU clocks over 2 GHz; go figure.
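That per-array claim reduces to simple arithmetic. A toy sketch using the thread's own numbers; the bytes-per-clock constant is a placeholder assumption (only the ratio between the machines matters), and total throughput across all CUs is a separate question that favours the wider GPU:

```python
# Toy model of the per-CU share of one L1 cache's throughput. Assumes
# the L1 delivers a fixed number of bytes per cycle, so array bandwidth
# scales with clock and the per-CU share divides by CUs per array.
BYTES_PER_CLOCK = 128  # hypothetical L1 delivery per cycle, not a spec

for name, cus_per_array, clock_ghz in (("PS5", 10, 2.23), ("XSX", 14, 1.825)):
    array_gbs = BYTES_PER_CLOCK * clock_ghz  # bytes/cycle x GHz = GB/s
    print(f"{name}: {array_gbs:.0f} GB/s per L1, "
          f"{array_gbs / cus_per_array:.1f} GB/s per CU")
```

Under those assumptions each PS5 CU gets a noticeably larger slice of its L1, which is the point being argued here; whether that slice, or the XSX's larger total, dominates depends on the workload.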

The FP16 I was talking about relates to ray tracing and local ray; go read up on it. Again, you're confusing yourself.


God knows what you're blabbing about with Nvidia and ray tracing and money and Sony. All we know is that someone has adopted the 32 patents of the Israeli company for low-cost ray tracing; go figure it out for yourself. The company has announced that a major console manufacturer has adopted the tech in consoles.

You stick with your simple 12 and 10; it's easy on you and your like. It's funny when posters don't understand anything and all they know is teraflopies. It's sweet.

PS5 games with ray tracing look superior so far on console. GET OVER IT.
 
Last edited:

Reindeer

Member
That's fine, but I'm talking about this thread, where people were referred to as stupid for expecting something completely different from that.
Believe me, there was a lot of stupidity on this forum when it came to expectations for these consoles. There were people expecting and touting things that were not physically possible for the limited hardware in these boxes. Fanboyism truly blinds people.
 

SafeOrAlone

Banned
Believe me, there was a lot of stupidity on this forum when it came to expectations for these consoles. There were people expecting and touting things that were not physically possible for the limited hardware in these boxes. Fanboyism truly blinds people.

I believe you. I'm sure some people did and it was annoying to read.
 

pyrocro

Member
You realise the bandwidth I am talking about is L2 to L1 and L1 to the L0s in the shaders -- bandwidth on silicon. You sticking with RAM bandwidth just shows your lack of understanding and exposes you.

The feeding of the CUs is done by caches: L2 to L1, then L1 to the L0s in the shader arrays.
LOL, it's not even clever now, pulling that straw man out of your ass, I see.

No need for VRAM, folks; according to you it's all about the cache hierarchy. Just stop, dude.
BANDWIDTH is part of the equation, with more of an effect on performance.


PS5 feeding the CUs from caches will be MUCH faster. Both PS5 and XSX have 4 shader arrays, and both have 4 L1 caches. PS5's clock is 20% higher, and a shader array is 10 CUs on PS5 versus 14 CUs on XSX.

PC parts will have shader arrays of 10 CUs and GPU clocks over 2 GHz; go figure.
All of this depends on memory bandwidth.

The FP16 I was talking about relates to ray tracing and local ray; go read up on it. Again, you're confusing yourself.
The reason people say TF is not a true indicator of power is that it's calculated from all parts of the GPU and it's not all functionally additive.
But FP16 is floating point 16: if you are operating on floats, you're doing a FLOP (floating-point operation).
You can't mention FP16 and act as though me saying it needs to be calculated is different from doing the actual ray tracing; you need to perform some operation on the FP16. LOL, jeez.
Thus in this case TF is relevant: 12 > 10 for your FP16 moving around in the GPU being operated on (it's more of the same kind of FLOPs on XSX).

God knows what you're blabbing about with Nvidia and ray tracing and money and Sony. All we know is that someone has adopted the 32 patents of the Israeli company for low-cost ray tracing; go figure it out for yourself. The company has announced that a major console manufacturer has adopted the tech in consoles.

You stick with your simple 12 and 10; it's easy on you and your like. It's funny when posters don't understand anything and all they know is teraflopies. It's sweet.

PS5 games with ray tracing look superior so far on console. GET OVER IT.
More with the ad hominems, because 12 is not more than 10? LOL. This is not a hard thing to get over.
But it's plain to see you're trying very hard to take the bandwidth out of the equation. What holds all of the scene data?
All those caching levels (L1, L2...) are just copies of incomplete parts of a scene. To render the scene you need to constantly read from VRAM. I guess you don't want it to work like that; tough luck, it just does.

At least stay on point, FFS. 12 is > 10, and BANDWIDTH feeds the cache. But according to you, let's just look at this part here where the numbers are higher but depend on this other thing you don't want to mention.
Brilliant head-in-the-sand strategy: everything else is important except the bandwidth that holds ALL of the DATA to be rendered (according to you).
 
Last edited:
It's interesting how the narrative changes in seven years. In 2013, 1080p vs 900p/720p was the only discussion. Now it's 1440p/60fps with RT vs 4K/60fps without RT? I'm really confused now.

Microsoft games are not going to have dynamic RT, so they'll all look ugly? Is that the new story?
 
Last edited:

bitbydeath

Member
Can you provide evidence for such a claim (for example, a developer working on an XSX game who said something like that)? With SFS's texture bandwidth reduction, XSX should be able to transfer high-quality textures as well as PS5 can. Maybe you wrote a lie on purpose just to provoke people? If that's the case, then you are the one who should be banned, not Bodomism, who responded to you in a similar tone.

I did, on the previous page.


This was common knowledge months ago, so I figured everyone here already knew.
 

geordiemp

Member
LOL, it's not even clever now, pulling that straw man out of your ass, I see.

No need for VRAM, folks; according to you it's all about the cache hierarchy. Just stop, dude.
BANDWIDTH is part of the equation, with more of an effect on performance.



All of this depends on memory bandwidth.


The reason people say TF is not a true indicator of power is that it's calculated from all parts of the GPU and it's not all functionally additive.
But FP16 is floating point 16: if you are operating on floats, you're doing a FLOP (floating-point operation).
You can't mention FP16 and act as though me saying it needs to be calculated is different from doing the actual ray tracing; you need to perform some operation on the FP16. LOL, jeez.
Thus in this case TF is relevant: 12 > 10 for your FP16 moving around in the GPU being operated on (it's more of the same kind of FLOPs on XSX).


More with the ad hominems, because 12 is not more than 10? LOL. This is not a hard thing to get over.
But it's plain to see you're trying very hard to take the bandwidth out of the equation. What holds all of the scene data?
All those caching levels (L1, L2...) are just copies of incomplete parts of a scene. To render the scene you need to constantly read from VRAM. I guess you don't want it to work like that; tough luck, it just does.

At least stay on point, FFS. 12 is > 10, and BANDWIDTH feeds the cache. But according to you, let's just look at this part here where the numbers are higher but depend on this other thing you don't want to mention.
Brilliant head-in-the-sand strategy: everything else is important except the bandwidth that holds ALL of the DATA to be rendered (according to you).

Look at this and go read up on CU utilisation; get back to me when we can discuss.

You read somewhere about being bandwidth-starved and homed in on one number. Again, sweet.

Go look at this:

[image]


Read parts of the AMD RDNA 1 whitepaper, which will help you understand, and get back to me when you can see the whole picture. An extract from the AMD whitepaper is below.


[extract from the AMD RDNA whitepaper]


Local ray uses FP16; go read up on it. I can link you some papers. You have no idea what you're blabbing about; it's embarrassing.
 
Last edited:

Barakov

Member


The official PlayStation website has been updated for the upcoming PS5 games that have been announced so far. Among the listed games is Ratchet and Clank: Rift Apart, which still doesn’t have a release date set for it.
According to the listing on the website, Ratchet and Clank: Rift Apart will run at a dynamic 4K resolution with HDR. There is a performance mode as well that will target 60 FPS:
  • Stunning visuals: Enhanced lighting and ray tracing make for super sharp visual fidelity. Displayed in crisp, dynamic 4K and HDR*, behold dazzling in-game worlds as you work to save the universe. Enjoy Performance Mode to experience a targeted 60fps frame rate as you encounter new enemies across multiple dimensions.
  • Fast loading: Planet-hop with abandon – near-instant loading via the PS5 system’s SSD sends you hurtling across the galaxy at hyper-speed.
  • Adaptive triggers: Feel unbridled dimensional energy via the DualSense wireless controller, making combat come alive. Each weapon has unique responses as you mow down foes.
  • Haptic feedback: Sense the impact of in-game rumbles and explosions through the DualSense wireless controller’s haptic feedback.
  • Tempest 3D AudioTech: Immerse your ears in 3D spatial environments, enabling you to hear everything above, below and surrounding you, all the while using your favorite pair of headphones. Connect with the sounds of combat and explore in wonder as worlds come to life, enveloping you with high fidelity sound.
Also, could this not release in 2020 as planned?
[screenshot: Ratchet & Clank: Rift Apart (PS5)]
I'm in. Give us a release date.
Also, I'll take dynamic 4K if it means 60fps.
 
If you wait a year, you might as well wait another year or so in hopes of GDDR7; memory bandwidth is going to be a major limiting factor this gen, after all.

And at that point I'm sure there would be something new to wait for too. There's always something new.

No, that's different.
One thing is waiting for something that you don't know when will come; another is not waiting for something that you know is coming.
They waited for RDNA2 to get RT, but decided they shouldn't wait a few months to get Zen 3; they deemed Zen 2 good enough already.
In the end they may be right: with this much GPU, you don't need a better CPU.
 
Look at this and go read up on CU utilisation; get back to me when we can discuss.

You read somewhere about being bandwidth-starved and homed in on one number. Again, sweet.

Go look at this:

[image]


Read parts of the AMD RDNA 1 whitepaper, which will help you understand, and get back to me when you can see the whole picture. An extract from the AMD whitepaper is below.


[extract from the AMD RDNA whitepaper]


Local ray uses FP16; go read up on it. I can link you some papers. You have no idea what you're blabbing about; it's embarrassing.

But this will just even the playing field.
While PS5's RT processing may be faster, the XSX can work on more ray-traced things simultaneously.
The reason we are seeing more games with RT working well is that Sony wanted to make a point of actually showing the feature. Because Microsoft has had no new XSX games to show up to now, they couldn't demonstrate how well their machine can do RT. But it can, at least at the same level as the PS5, I believe.
 

DeepEnigma

Gold Member
ITT: new Sony games are super gorgeous at 1440p because of ray tracing and 4K doesn't matter.

Also, Xbox games at 4K are "ugly" until proven otherwise.
I'm supposed to believe SIE devs are 30% better than XGS devs. That's rich.
It's interesting how the narrative changes in seven years. In 2013, 1080p vs 900p/720p was the only discussion. Now it's 1440p/60fps with RT vs 4K/60fps without RT? I'm really confused now.

Microsoft games are not going to have dynamic RT, so they'll all look ugly? Is that the new story?

You keep baiting, but it's not sticking. Holy shit, relax.
 
If it's just "targeting" 60fps then it better be native 4k. I don't see how they could be using any render scaling and still struggling to hit 60fps.


Agree. I hate the "targeting" language that's being used; targeting doesn't mean 60fps. 60fps shouldn't be thrown around as a marketing term unless the game can hit it 85%+ of the time (or some similarly high percentage). Any time I hear "targeting", it immediately becomes disappointing. I'm sure the game will be great, but I was under the impression that next gen was going to come with the thunder, not "targeting", "performance mode", "variable resolution", etc. Although I am fine with variable resolutions...
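For context, "targeting 60fps" with a dynamic resolution usually means a feedback loop over measured GPU frame time. A minimal sketch of such a controller; the constants and the 0.25 gain are illustrative, not any engine's actual API:

```python
# Dynamic-resolution controller targeting 60fps: measure the GPU frame
# time and nudge the render scale so frames fit the 16.6 ms budget.
TARGET_MS = 1000 / 60
MIN_SCALE, MAX_SCALE = 0.7, 1.0  # e.g. 70%..100% of 3840x2160

def update_render_scale(scale: float, gpu_frame_ms: float) -> float:
    error = (TARGET_MS - gpu_frame_ms) / TARGET_MS  # + under budget, - over
    scale += 0.25 * error * scale                   # proportional nudge
    return max(MIN_SCALE, min(MAX_SCALE, scale))

scale = 1.0
for frame_ms in (18.0, 17.1, 16.6, 15.9, 15.1):  # a simulated heavy scene
    scale = update_render_scale(scale, frame_ms)
    print(f"{frame_ms:.1f} ms -> render at {int(3840 * scale)}x{int(2160 * scale)}")
```

So "targeting" means the resolution gives way first and the frame rate gives way second; whether that counts as "60fps" is exactly the argument above.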
 

pawel86ck

Banned
I did, on the previous page.

This was common knowledge months ago, so I figured everyone here already knew.
There's nothing there about XSX not being able to run such high-quality assets. Tim Sweeney only said fast I/O is necessary for what they want to accomplish; it's only your own interpretation if you think XSX is too slow (raw speed is not everything).

It's possible MS lied to people about their Velocity Architecture, but assuming they didn't, XSX I/O is faster than people like you are willing to admit. When you only need to load 2.5x less data into RAM, not only can you load the same textures 2.5x faster, you also effectively have 2.5x more RAM available (without the 2.5x SFS gain, they would need 33 GB of RAM to hold the same textures).
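Taking the figures in this post at face value, the claim is straightforward multiplication; the 2.5x multiplier and the ~13.5 GB of game-available RAM are the publicly quoted numbers, assumed here rather than verified:

```python
# The SFS claim reduced to arithmetic: if sampler feedback means only
# ~1/2.5 of each texture's mip data needs to be resident, the same RAM
# behaves like a much larger naive texture pool.
sfs_gain = 2.5       # Microsoft's quoted multiplier, taken at face value
usable_ram_gb = 13.5 # approximate game-available RAM quoted for XSX

print(f"Effective texture pool: ~{usable_ram_gb * sfs_gain:.1f} GB")    # ~33.8 GB
print(f"Load time for the same textures: {1 / sfs_gain:.0%} of naive")  # 40%
```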
 
Last edited:

geordiemp

Member
But this will just even the playing field.
While PS5's RT processing may be faster, the XSX can work on more ray-traced things simultaneously.
The reason we are seeing more games with RT working well is that Sony wanted to make a point of actually showing the feature. Because Microsoft has had no new XSX games to show up to now, they couldn't demonstrate how well their machine can do RT. But it can, at least at the same level as the PS5, I believe.

I never said PS5 is more powerful, did I? I said it's not as simple as 12 vs 10, and they will be closer than people think, as there are some things in XSX's favour and some in PS5's.

I am sure PS5 will run some stuff better; I am sure XSX will run some stuff better.

It's the 12-vs-10 "powa" stuff that's annoying.
 
Last edited:

bitbydeath

Member
There's nothing there about XSX not being able to run such high-quality assets. Tim Sweeney only said fast I/O is necessary for what they want to accomplish; it's only your own interpretation if you think XSX is too slow (raw speed is not everything).

It's possible MS lied to people about their Velocity Architecture, but assuming they didn't, XSX I/O is faster than people like you are willing to admit. When you only need to load 2.5x less data into RAM, not only can you load the same textures 2.5x faster, you also effectively have 2.5x more RAM available (without the 2.5x SFS gain, they would need 33 GB of RAM to hold the same textures).

It mentions it's built against the PS5's speed to achieve those graphics; anything less will still work but will scale down in quality.
 

geordiemp

Member
ITT: new Sony games are super gorgeous at 1440p because of ray tracing and 4K doesn't matter.

Also, Xbox games at 4K are "ugly" until proven otherwise.

Is this beautiful? If it were 8K, would it be beautiful? NO.

You can put lipstick on a pig and it's still a pig.


[image]


I am sure XSX will have much better-looking games, for sure; maybe Valhalla will look great. But going on resolution alone is a silly metric. Forza will look great, and I'm not too sure what's next up.

Pixel quality, not quantity.
 
Last edited:
There's nothing there about XSX not being able to run such high-quality assets. Tim Sweeney only said fast I/O is necessary for what they want to accomplish; it's only your own interpretation if you think XSX is too slow (raw speed is not everything).

It's possible MS lied to people about their Velocity Architecture, but assuming they didn't, XSX I/O is faster than people like you are willing to admit. When you only need to load 2.5x less data into RAM, not only can you load the same textures 2.5x faster, you also effectively have 2.5x more RAM available (without the 2.5x SFS gain, they would need 33 GB of RAM to hold the same textures).

The "Velocity Engine" must be the "Industry Standard" which developers will aim for the PC ecosystem. The PS5 speeds are probably "overkill", desirably but not necessary. How slow is too slow anyway?
 
There's nothing there about XSX not being able to run such high-quality assets. Tim Sweeney only said fast I/O is necessary for what they want to accomplish; it's only your own interpretation if you think XSX is too slow (raw speed is not everything).

It's possible MS lied to people about their Velocity Architecture, but assuming they didn't, XSX I/O is faster than people like you are willing to admit. When you only need to load 2.5x less data into RAM, not only can you load the same textures 2.5x faster, you also effectively have 2.5x more RAM available (without the 2.5x SFS gain, they would need 33 GB of RAM to hold the same textures).

Dude, come on. Who in their right mind believes that?
 
Is this beautiful? If it were 8K, would it be beautiful? NO.

You can put lipstick on a pig and it's still a pig.


[image]


I am sure XSX will have much better-looking games, for sure; maybe Valhalla will look great. But going on resolution alone is a silly metric. Forza will look great, and I'm not too sure what's next up.

Pixel quality, not quantity.

Dude, is Halo: Infinite the only thing you have to hold onto?
 

geordiemp

Member
People believe the PS5 GPU will sustain 2.2 GHz despite the odds, so you may as well believe the SFS gains are real.

"Sustain" is a word I like to see; the old clock FUD. PS5, AMD PC parts, everything from RDNA2 will run at 2.2 GHz, except XSX. I said it six months ago: I bet PC parts come in at PS5-like frequencies, and hey, look... leaks...

There are some slower AMD parts coming out, at 5700-type frequencies, so we will find out more from AMD later this month. I have my theories, but it's not long now.

Does it make you wonder? At least the XSX will be super quiet, unlike my fucking Pro.

You believe what you want; the results speak for themselves. Everybody has to digest the PR and tech for themselves and draw their own conclusions.

OK, xym, or whatever your preferred pronoun is. Nobody is FUD-ing the PS5; it's going to be a great gaming platform. That doesn't mean y'all get a free pass to shit all over Xbox.

If you read my posts, I often say XSX will likely have a slight edge... sometimes it depends on the task... but not 12 vs 10, and not 18%. That is not shitting on it; it's called equality. XSX is a great system, PS5 is a great system; they are so similar but do things differently, which is why I find it interesting.

They will have different games as well. When XSX gets The Evil Within 3 it will be on my list; I love that series. Best zombie game, IMO.

I scoff at posters who throw out the 12-vs-10 "powa" angle and think everything will look 18% better; that's just wrong.
 
Last edited:

MrFunSocks

Banned
All this arguing over ray tracing, when the two big games that were supposedly graphical showcases for all things PS5 have now been shown to be pretty underwhelming technically. "Targeting", "up to", and very little actual ray tracing, and what is there looks extremely low quality (Spider-Man, for example).

Sony boys who carried on about PS5 doing ray tracing better than the Series X because of the clock speed should now be able to see that "lower quality but slightly faster" isn't the magic bullet they were hoping for.
 
Last edited: