Riky
$MSFT
They give a lot of opinions too, all the time, come on. They review the hardware, they review how the games run on their consoles.
They give stats on resolution and framerates, when they saw a machine doesn't change those stats.
Not to be rude, but are you not meant to sign nondisclosure and secrecy contracts when working with companies like this?
I have worked on realtime game loops for 5 years now at Lockheed Martin on their missile simulation programs.
Your comment is ridiculous, and no real programmer would ever disjoin film graphics (the science of graphics) from realtime graphics programming. It's like saying one of my buddies who worked at Disney and now works at Nvidia on their OptiX program has no validity in knowing what goes on in a realtime graphics pipeline.
I won't even read the rest of your post as I will assume it's just trolling. You can think I have no credible knowledge all you want, but I get plenty of invites from game companies to join their graphics team. They must be stupid.
How naive, they pick and choose what to show all the time; it's very opinion based. They even changed how they did their videos over time so that they would become more opinion based.
They give stats on resolution and framerates, when they saw a machine doesn't change those stats.
I can disclose my position and a general practice of what I work on. It's all advertised as public domain on LinkedIn. If it wasn't allowed, LMT would have told me years ago.
Not to be rude, but are you not meant to sign nondisclosure and secrecy contracts when working with companies like this?
I mean, is it wise to disclose this type of information on a forum, any forum for that matter?
Or is the work you do not of that sensitive nature?
I just can't believe they didn't do a video on your claim that 30fps games play at 60fps on certain TVs either.
I am not sure it is correct to call this CPU bound just because the GPU is not completely utilised. If additional computational capacity was required, Epic would have utilised multiple cores and not one thread/core. I think it is I/O limited and hence why they see a good correlation with frequency.
If it is I/O limited you would also see quite large differences in PC performance when comparing CPUs that are comparable in terms of single-threaded performance but have large cache size differences. I hope someone does that test.
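The cache-size comparison suggested above needs two different CPUs, but the underlying effect can be illustrated on one machine: an access pattern that defeats the cache slows a memory-bound loop down even though the instruction count is identical. A rough sketch (in CPython the interpreter overhead drowns out most of the effect, so treat this as an illustration of the access pattern rather than a real microbenchmark):

```python
import time
from array import array

def traverse(data, stride):
    # Visit len(data) elements with a fixed stride, wrapping around.
    # When gcd(stride, len(data)) == 1 every element is visited exactly once,
    # so the work is identical for every stride; only the memory access
    # pattern changes. Large strides touch a new cache line on almost
    # every access, which is where a cache/memory-bound workload suffers.
    total, i, n = 0, 0, len(data)
    for _ in range(n):
        total += data[i]
        i = (i + stride) % n
    return total

if __name__ == "__main__":
    data = array("q", range(1 << 20))  # ~8 MB of 64-bit ints
    for stride in (1, 4097):           # sequential vs cache-hostile
        t0 = time.perf_counter()
        traverse(data, stride)
        print(stride, round(time.perf_counter() - t0, 3))
```

In a compiled language the stride-4097 pass is dramatically slower than the sequential one on a cache-starved part; that is the kind of signature a "missed data and stalling" bottleneck would show.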
What? Like, literally what are you even trying to say here?
Right, "sony warriors". What would they gain from it considering that the CPU is clocked lower?
The irony
Such a stupid thing to say from an xbox warrior.
Tell me you have no idea what you're talking about without actually saying it.
It's the idea that everything can be done asynchronously, and that UE5 is "odd" or "unoptimised" or "single threaded" when pretty much every current RT-capable engine would behave the same, that you are not understanding.
If Alex knew anything, what he should actually check too is higher-performance memory.
It doesn't matter how many damn cores you have in most engines. Are you one of those people who buy a threadripper expecting massive gains in your games too?
Everybody knows that there must be some kind of missile simulation thing going on... it's not like he spat the code and schematics onto GitHub or something.
Not to be rude, but are you not meant to sign nondisclosure and secrecy contracts when working with companies like this?
It's pretty self explanatory. Why would a "sony warrior", in your eyes, advocate an engine requiring higher CPU clock speed over more cores if the CPU clock speed is lower on a PS5?
What? Like, literally what are you even trying to say here?
The irony
It's pretty self explanatory. Why would a "sony warrior" in your eyes advocate an engine requiring higher clock CPU speed over more cores if the CPU clock speed is lower on a PS5?
It's you projecting your nonsense here.
Just because I like to clown you lot,
"Significantly multithreaded"
Go bench the games you listed on a 10900K. First cut your cores in half (therefore still having 10 threads available) and see what percentage drop in performance you experience. Then cut your clocks in half and tell me what percentage loss in performance you get.
Then kindly get lost.
Who mentioned "100% uplifts" anywhere either?
Why do you dopes always try and drive off the devs and journalists that post here?
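For what it's worth, the core-halving half of that test can be scripted rather than done in the BIOS. A Linux-only sketch using `os.sched_setaffinity` (the benchmark binary shown is a hypothetical placeholder, and halving clocks would still be a separate step, e.g. via `cpupower frequency-set`):

```python
import os
import subprocess

def run_on_half_cores(cmd):
    """Run `cmd` pinned to the first half of the CPUs available to us.

    Linux-only: relies on os.sched_getaffinity / os.sched_setaffinity.
    """
    cpus = sorted(os.sched_getaffinity(0))
    half = set(cpus[: max(1, len(cpus) // 2)])
    # preexec_fn runs in the child before exec, so only the benchmark
    # inherits the reduced mask; the parent keeps its full affinity.
    return subprocess.run(cmd, preexec_fn=lambda: os.sched_setaffinity(0, half))

# Example (hypothetical benchmark binary):
# run_on_half_cores(["./ue5_demo_bench", "-benchmark"])
```

On Windows the equivalent would be setting processor affinity in Task Manager or via `start /affinity`.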
The term "highly threaded" means that you are assuming FPS is more sensitive to thread count than to clock frequency. How about comparing that test with decreasing clock frequency by 1/2? I'm curious, actually. If there is MORE FPS loss than from the core count reduction, then you can't claim "highly threaded" (i.e. thread count > clock frequency).
It's more about sony warriors having a big ol' REEEEEEEEEEEEEEE fest over DF findings.
Just because I like to clown you lot,
16 Threads, e-cores limited
8 threads
Wow, that's quite a lot more than the ~10% difference that Alex showed, isn't it? It's... it's almost like he's right.
And for fun
4 threads
Because it isn't 2005 any more and games are indeed highly threaded work loads now.
After you
You did, the moment you tried to act as though a demonstration of performance for clocks vs threads was "nonsense" while trying to bring ram into the equation
I'm not claiming that there is a 1 to 1 relation of core count vs performance. No one has, in fact. Not me, not Alex.
The term "highly threaded" means that you are assuming FPS is more sensitive to thread count than to clock frequency. How about comparing that test with decreasing clock frequency by 1/2? I'm curious, actually. If there is MORE FPS loss than from the core count reduction, then you can't claim "highly threaded" (i.e. thread count > clock frequency).
Riiight, a REEE fest over DF findings. What does that even mean?
It's more about sony warriors having a big ol' REEEEEEEEEEEEEEE fest over DF findings.
This is what you think a "Sony warrior" REEE fest is? Just stating some facts?
It's not a single threaded engine. There are things you can't do asynchronously in a frame, so faster clocks always gain you quicker frametimes even if the engine is completely multithreaded. Faster clocks always give you faster frametimes if you're CPU bound. More cores rarely do; only if you don't have enough does it become a problem.
The engine is highly multithreaded. You would have to dissect the specific algorithms at play to see where they implement a complex function without using more cores. We would have to dissect the Nanite code and the Lumen code to find out where the bottleneck is (if there is one). I can only guess, but perhaps they had to use a single thread for Lumen for its specific algorithm and haven't added multithreading support yet. Or something in their algorithm HAS to be single threaded (i.e. reading in a large packet of data that must be read serially).
I'm not claiming that there is a 1 to 1 relation of core count vs performance. No one has, in fact. Not me, not Alex.
The problem is the 4% delta in UE5 demo.
4% lower performance for a 50% drop in available cores shows a clear lack of multithreaded optimisation. The End.
Yea, and 5 cores is more than enough for the poorly multithreaded demo.
Only if you don't have enough does it become a problem.
I have a problem with this statement, as you have no idea how these algorithms work in code for you to say it's poorly multithreaded. There are many things in a pipeline that can't be multithreaded. Just because you can use multiple cores to solve a problem doesn't mean that can be applied to ANY problem, and in that situation clock frequency is the only way to gain more performance.
Yea, and 5 cores is more than enough for the poorly multi threaded demo.
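The disagreement above is essentially Amdahl's law: if some fraction of each frame is inherently serial, halving the core count barely moves frametimes, while halving the clock always roughly halves FPS. A back-of-the-envelope sketch (the 41% parallel fraction is a number chosen to roughly reproduce a ~4% drop going from 16 to 8 threads, not a measured value):

```python
def relative_fps(parallel_frac, cores, clock_scale=1.0):
    # Amdahl's law applied to a frame: the serial part is untouched by
    # extra cores, the parallel part divides across them, and the whole
    # frame time stretches by 1/clock when the clock is scaled down.
    frame_time = ((1.0 - parallel_frac) + parallel_frac / cores) / clock_scale
    return 1.0 / frame_time

base = relative_fps(0.41, 16)
print(relative_fps(0.41, 8) / base)        # ~0.96: halving cores costs ~4%
print(relative_fps(0.41, 16, 0.5) / base)  # 0.5: halving clocks costs 50%
```

Under this model a CPU never needs to show 100% utilisation across all cores to be CPU bound; the serial portion alone sets the frametime floor.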
I have a problem with this statement as you have no idea how these algorithms work in code for you to say it's poorly multithreaded
^50% less threads, only 4% less performance
I care. Because you could be stating a problem without understanding how algorithms work. There are many armchair programmers on these boards who make ridiculous claims, as if they were supervisors with years of experience, that something is "unoptimized" when in fact it is not. It could simply be that the algorithm is too expensive for today's hardware; case in point: RT GI. The technique is simply more expensive than GI light probes. There is no amount of "optimization" on a low-end hardware configuration like the consoles that will remedy the expense. It's simply too expensive for said hardware and we need to wait for more powerful hardware.
No one cares about the exact cause of the problem. It is still a problem.
Anything that leaves hardware resources on the table while also having limited performance is, by definition, unoptimised.
I care. Because you could be stating a problem without understanding how algorithms work. There are many armchair programmers on these boards that make ridiculous claims as if they are supervisors with years of experience to claim something is "unoptimized" when in fact, it is not. It could simply be that the algorithm is too expensive for today's hardware.
Yea, and 5 cores is more than enough for the poorly multi threaded demo.
And why would I test clocks? Do you need someone to show the obvious, that clocks do affect performance? Who said otherwise?
Here, I'll type it out in big letters so you might have a chance of understanding the problem.
50% less threads, only 4% less performance
There, do you understand the problem now? Cool.
I've clearly shown direct evidence from a system running today that it is not normal for there to be a 4% difference when dropping core counts by half, in the current year where every current gen console has more than that and PCs regularly have 8+ cores.
Why would you test it? Because you are making claims like this:
"UE5 is evidently poorly threaded as per Alex's testing where lower core counts made little difference but decreasing clocks made a significant difference."
When that's actually normal.
Thanks for the link, I just read it. Dictator is either missing my point or intentionally stating the incorrect assumption. I clearly state that the game is CPU bound and mostly cache/data related, and his comment is focusing on the SSD and memory allocation/size, when I clearly state in the video that the size of assets and throughput is NOT the issue here; missed data and stalling is.
School's out it seems lol
Zen 2 wasn't good when it launched?
I've clearly showed direct evidence from a system running today that it is not normal for there to be a 4% difference when dropping core counts by half in current year where every current gen console has more than that and PCs regularly have 8+ cores
And no one cares about your shitty 3950x, which wasn't even good when it launched. All that showing its crappy benchmarks does is tell us AMD tried to throw cores at their lack-of-performance problem, and that 8-10 cores is around the limit for current engines for performance. I do find it rather adorable that you scoured the internet for hours to find one crappy benchmark to try and back up your warrior bullshit though.
And guess what, UE5 demo is still poorly optimised just as Alex showed. The End.
I find it hard to believe that with your software engineering experience you don't think running software in a debug/dev build has overhead?
It does when you turn on the stats; this can impact performance a great deal, but only when on. During testing I have this disabled, but I will build a release version to see if that is also affecting it.
It's so obvious. As soon as I/O on consoles is mentioned, and in this context freeing up CPU resources, deliberate misunderstanding takes place.
Thanks for the link, I just read it. Dictator is either missing my point or intentionally stating the incorrect assumption. I clearly state that the game is CPU bound and mostly cache/data related and his comment is focussing on the SSD and memory allocation/size. When I clearly state in the video the size of assets and throughput is NOT the issue here, missed data and stalling is.
Schools out it seems lol
So he has nothing at all to do with programming and game making. That would explain why he's been a laughingstock of this domain for years and years, and why on beyond3d they actually opened a thread dedicated to his mistakes, because they got tired of people facepalming at his every video and of how wrong he is infecting every thread, with absolutely every video he ever makes. Including the video this thread is about, which was disproven by Alex and Epic devs.
Don't know if someone already knew this, or if it works with XSX or PS4 and other consoles, but there is a "trick" to play all your games "at 60 FPS", at least on PS5 with a new Bravia TV. Keep in mind I'm not English, so my translation of the menu settings could be completely wrong. It's very simple: first, make sure the HDMI enhanced setting is not set to VRR mode, because that forces the TV into game mode, and that setting doesn't support the "trick". Go to the picture/image settings menu on your Bravia (again, I don't know how it's named in English); activate and set everything as high as possible in the Motionflow option, and do the same for the movie/film mode option. Via interpolation, all your games will now run at 60 FPS.
“It just does not shift from 60 frames per second on PlayStation 5. It’s just flawless, it’s brilliant.”
- Richard Leadbetter, Digital Foundry
I never said they could test it, you are the one trying to get me on a technicality or trying to restrict what access can mean. They were given privileged information while visiting MS and then made a video speculating about something they already knew about and even got the specs right (when there were multiple different rumors going around).
4TF seemed unbelievably low at that time for a nextgen console, so I think it's pretty acceptable to assume they were given that information too. You seem 100% sure of everything they were told or all they talked about during that secret trip, I'm not.
They keep pushing the Series S as brilliantly designed at $300; that doesn't sound right at all to me and never will. Sounds like straight-up shilling.
You spent over 21 mins ranting about PS5's super fast SSD and I/O.
Clearly the people who engineered UE5 have no idea what they created and you do.
But don't let me stop you. You got a huge sony fan base who thinks regurgitating technical jargon means you actually know what they mean.
I just found it rather surprising, that's all; my father worked for the Ministry of Defence in the 60s on, among other things, the engines of the Harrier jump jet.
Everybody knows that there must be some kind of missile simulation thing going on.... it's not like he spat the code and scematics in github or something.
I have a friend who did some path finding programming on military HW 20 years ago (not in the US). This is literally all I know about his work, all he could ever say.
"My 3950x"? What are you, 12 years old? I thought you said I had a Core 2 Duo anyway?
I've clearly showed direct evidence from a system running today that it is not normal for there to be a 4% difference when dropping core counts by half in current year where every current gen console has more than that and PCs regularly have 8+ cores
And no one cares about your shitty 3950x which wasn't even good when it launched. All that showing it's crappy benchmarks do is tell us AMD tried to throw cores at their lack of performance problem and that 8-10 cores is around the limit for current engines for performance. I do find it rather adorable that you scoured the internet for hours to find one crappy benchmark to try and back up your warrior bullshit though.
And guess what, UE5 demo is still poorly optimised just as Alex showed. The End.
"My 3950x" what are you 12yrs old? I thought you said I had a core 2 duo anyway?
Oh, you glossed over the part where I said 8-10 cores is the limit for current games and engines? Of course you did.
So you're telling me that more cores didn't give AMD better performance in the land of "significantly multithreaded" engines? Wow who would have thought. Maybe they listened to your flawed logic.
Figure of speech my ass. You are so immature that you think it's all about the size of your e-penis in conversations instead of valid arguments; hence your silly call of "Sony warriors", assuming I have a "Core 2 Duo", and now "my shitty 3950x", when being shown that halving the cores of a 10 core / 20 thread CPU results in single-figure percentage drops, while halving your clocks results in far bigger drops, as expected, as long as you have enough cores.
A basic figure of speech is apparently too high a concept for you to grasp, can't say I'm surprised.
Oh, you glossed over the part where I said 8-10 cores is the limit for current games and engines? Of course you did.
Figure of speech my ass. You are so immature that you think it's all about the size of your e-penis in conversations instead of valid arguments. hence your silly call of "Sony warriors", assuming I have a "Core 2 Duo", and now "my shitty 3950x" when being shown that the result of having your cores halved in a 10 core 20 thread CPU would result in single figure percentage drops compared to halving your clocks which would result in far bigger drops as expected as long as you have enough.
You were blatantly saying that low clocks affecting performance more than lower core counts is a sign of being "poorly multithreaded". I tell you that more cores rarely give you significant performance boosts and that a threadripper would give you no gain compared to higher clocks, and you point to "significantly multithreaded engines like CP2077". I tell you to test the two options on those engines: halve clocks, then halve cores to 10 threads. You go as low as 2 cores (4 threads) and don't even do the other half of the test, halving clocks.
Lo and behold, you would get no performance boost from having a threadripper, and now you're trying to pretend that you knew all along that "8 cores was a limit" and that going from 12 cores to 6 with an 8% drop is significant and highly multithreaded.
When told that other factors like memory performance can change this percentage in fps, because the cores themselves can still idle (see Hoddi's histogram), you call that irrelevant. You would rather talk complete shit and engage in e-penis swinging than bring a valid point.
yes, zen 2 was still not good enough
Zen 2 wasn't good when it launched?
Just because the 9900k was better than zen 2 doesn't mean the latter wasn't good considering how much cheaper it was.
You really are especially clueless aren't you?
Figure of speech my ass
He's right though, and you are backed into a corner it seems. I also asked you about comparing the clock frequency, as a regular programmer would do in such a test, and you brushed it off.
You really are especially clueless aren't you?
I get it, you thought you finally had a DF gotcha moment, ended up getting clowned instead, and now you're upset.
Not sure, but I think they did say they are aiming for 60 with the nextgen fruit. So with hope, what we are seeing is far from complete. Someone may correct or verify.
Based on these tech videos I feel like Epic has over-promised and under-delivered with this engine. If this had happened at the start of the generation it wouldn't be so bad, but after 1.5 years of 60fps games on console it'll be painful to go back to 30fps. And if we can't even get 60fps on PC, why would we want this engine to be used?