
AMD Files Patent for Hybrid Ray Tracing Solution

thelastword

Banned
"It's nearly impossible to discuss graphics tech in 2019 without bringing up real-time ray tracing. The rendering technique has been popularized by Nvidia, Microsoft, and an increasing number of game developers over the last few months. AMD's stayed pretty quiet about how it plans to support hardware-accelerated ray tracing, but a patent application published on June 27 offered a glimpse at what it's been working on.

AMD filed the patent application with the U.S. Patent and Trademark Office (USPTO) in December 2017. It describes a hybrid system that enables real-time ray tracing using a variety of software and hardware methods rather than relying on just one solution. The company said this approach should allow it to overcome the shortcomings associated with previous attempts to bring ray tracing to the masses.

In the application, AMD said that software-based solutions "are very power intensive and difficult to scale to higher performance levels without expending significant die area." It also said that enabling ray tracing via software "can reduce performance substantially over what is theoretically possible" because they "suffer drastically from the execution divergence of bounded volume hierarchy traversal."

Basically: using software to enable ray tracing on hardware that hasn't been optimized for the rendering technique requires a significant performance sacrifice. Most people don't like it when their hardware is hamstrung by software, even if it's supposed to enable some fancy new graphics, and the inability to handle other processing tasks at the same time can also make the graphics look worse anyway.

AMD didn't think hardware-based ray tracing was the answer either. The company said those solutions "suffer from a lack of programmer flexibility as the ray tracing pipeline is fixed to a given hardware configuration," are "generally fairly area inefficient since they must keep large buffers of ray data to reorder memory transactions to achieve peak performance," and are more complex than other GPUs.

So the company developed its hybrid solution. The setup described in this patent application uses a mix of dedicated hardware and existing shader units working in conjunction with software to enable real-time ray tracing without the drawbacks of the methods described above. Here's the company's explanation for how this system might work, as spotted by "noiserr" on the AMD subreddit:

It's worth noting that this application was filed a year-and-a-half ago; AMD might have developed a new ray tracing system in the interim. But right now it seems like the company doesn't want to go the exact same route as Nvidia, which included dedicated ray tracing cores in Turing-based GPUs, and would rather use a mix of dedicated and non-dedicated hardware to give devs more flexibility."





https://www.tomshardware.com/news/amd-patents-hybrid-ray-tracing-solution,39761.html
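To make the patent's split concrete, here is a minimal CPU-side sketch (my own construction, not AMD's actual design): the programmable "shader" owns the traversal loop and the stack, while a separate routine stands in for the fixed-function unit that evaluates a single node and hands the result back.

```python
# Toy sketch of the hybrid split: the "shader" (plain Python here) drives
# BVH traversal; intersect_aabb stands in for the fixed-function unit that
# evaluates one node and returns the result. All names are illustrative.

def intersect_aabb(o, d, lo, hi):
    """Slab test: does a ray (origin o, direction d) hit the box [lo, hi]?"""
    tmin, tmax = 0.0, float("inf")
    for axis in range(3):
        if abs(d[axis]) < 1e-12:                  # ray parallel to this slab
            if not (lo[axis] <= o[axis] <= hi[axis]):
                return False
            continue
        t0 = (lo[axis] - o[axis]) / d[axis]
        t1 = (hi[axis] - o[axis]) / d[axis]
        if t0 > t1:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
        if tmin > tmax:
            return False
    return True

def traverse(root, o, d):
    """Shader-side loop: the stack and the 'what next' decision live in
    software; only the per-node box test is (conceptually) offloaded."""
    hits, stack = [], [root]
    while stack:
        node = stack.pop()
        if not intersect_aabb(o, d, node["lo"], node["hi"]):
            continue                               # unit reported a miss
        if "leaf" in node:
            hits.append(node["leaf"])              # record the primitive id
        else:
            stack.extend(node["children"])
    return hits
```

On real hardware both halves run per-wavefront and the intersection unit sits in silicon, but the division of labor is the same: the shader asks, the unit answers, the shader decides what to do next.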
 

Ascend

Member
Sounds like a good idea. Hopefully, nVidia's RTX implementation won't gimp AMD like GameWorks did.
 

SonGoku

Member
I've found these bits very interesting:
AMD said that software-based solutions "are very power intensive and difficult to scale to higher performance levels without expending significant die area." It also said that enabling ray tracing via software "can reduce performance substantially over what is theoretically possible" because they "suffer drastically from the execution divergence of bounded volume hierarchy traversal."
AMD didn't think hardware-based ray tracing was the answer either. The company said those solutions "suffer from a lack of programmer flexibility as the ray tracing pipeline is fixed to a given hardware configuration," are "generally fairly area inefficient since they must keep large buffers of ray data to reorder memory transactions to achieve peak performance," and are more complex than other GPUs.
So the company developed its hybrid solution. The setup described in this patent application uses a mix of dedicated hardware and existing shader units working in conjunction with software to enable real-time ray tracing without the drawbacks of the methods described above.

This pretty much confirms the RT solution consoles are going with, and by extension RDNA2.
But right now it seems like the company doesn't want to go the exact same route as Nvidia, which included dedicated ray tracing cores in Turing-based GPUs, and would rather use a mix of dedicated and non-dedicated hardware to give devs more flexibility.
 

thelastword

Banned
I've found these bits very interesting:




This pretty much confirms the RT solution consoles are going with, and by extension RDNA2.
Yes, good. I was going to highlight some of these bits, but I wanted people to read the whole article because it's really good.
 

llien

Member
And people think NVs solution is not hybrid because?

Sounds like a good idea. Hopefully, nVidia's RTX implementation won't gimp AMD like GameWorks did.
The API was approved by Microsoft (and is part of the DirectX stack); I don't see how NV could cripple it.
 

phil_t98

Gold Member

Whichever console gets ray tracing right will trounce the other. If AMD has found a good solution, we could all benefit, and it would be a huge leap forward for next gen and next-gen games. Exciting times ahead, eh?
 

CrustyBritches

Gold Member
Welcome to last week...
 
The same way that tessellation was API-approved and still allowed nVidia to gimp AMD and their own older cards.
Nvidia didn't gimp any card's performance, but they made their Nvidia-sponsored titles rely too heavily on tessellation, so AMD's cards, which were not as proficient at it, suffered.

In other words blame developers for taking that check.
 

Ascend

Member
Nvidia didn't gimp any card's performance, but they made their Nvidia-sponsored titles rely too heavily on tessellation, so AMD's cards, which were not as proficient at it, suffered.
That's exactly what gimping is.

In other words blame developers for taking that check.
Not really. Developers received black box DLLs which they couldn't modify, or they had to pay extra to get access to the source code, and any change in the code required approval by nVidia before being shipped.
 

llien

Member
What do you mean? If it's a standard, that means companies have to build their cards around it.
Using tessellation where it wasn't needed, which artificially crippled the opponent's cards.

Gameworks, of course, was a whole different level:
AMD's chief gaming scientist Richard Huddy told Ars Technica. "Around two months before release, or thereabouts, the GameWorks code arrived with HairWorks, and it completely sabotaged our performance as far as we're concerned. We were running well before that... it's wrecked our performance, almost as if it was put in to achieve that goal."
 
That's exactly what gimping is.

Not really. Developers received black box DLLs which they couldn't modify, or they had to pay extra to get access to the source code, and any change in the code required approval by nVidia before being shipped.
What are you talking about? If it's a gameworks title then those devs are getting a check from green team.
 

Ascend

Member
What are you talking about? If it's a gameworks title then those devs are getting a check from green team.
Sure. And if they wanted access to the DLL source code, they would have to either pay nVidia for the license, or it would be deducted from the check you mention. After GameWorks was practically dead, they decided to make the source code public.

Why do you think many of the games that were the worst optimized happened to use GameWorks? Because developers had no access to the source code and thus couldn't optimize. It was so bad that this happened:


But let's try and return this thread to ray tracing and AMD's implementation.
 

Soltype

Member
I'll give you GameWorks, since it's exclusive, but tessellation is a hardware feature on both cards.
 

lukilladog

Member
Whichever console gets ray tracing right will trounce the other. If AMD has found a good solution, we could all benefit, and it would be a huge leap forward for next gen and next-gen games. Exciting times ahead, eh?

There is still plenty of room to offer a leap forward with shaded graphics; it makes no sense to think about replacing techniques when the hardware can't even exploit the full potential of the existing ones.

 
Sure. And if they wanted access to the DLL source code, they would have to either pay nVidia for the license, or it would be deducted from the check you mention. After GameWorks was practically dead, they decided to make the source code public.

Why do you think many of the games that were the worst optimized happened to use GameWorks? Because developers had no access to the source code and thus couldn't optimize. It was so bad that this happened:


But let's try and return this thread to ray tracing and AMD's implementation.
I suppose I don't know what you're saying. Are devs being forced to use this code?
 

vpance

Member
There is still plenty of room to offer a leap forward with shaded graphics; it makes no sense to think about replacing techniques when the hardware can't even exploit the full potential of the existing ones.


Heretic demo looked really nice. I wonder what spec PC it ran on.

Edit: Found out. The demo is 1440p but runs at 1080p30 on a 1080 Ti.
 
Last edited:

SonGoku

Member
Sounds like a good idea. Hopefully, nVidia's RTX implementation won't gimp AMD like GameWorks did.
Don't worry. If next-gen consoles support it (which is why it's in AMD's best interest that both are RDNA2), devs will optimize around AMD hardware, similar to how GCN had the edge over Kepler.
There is still plenty of room to offer a leap forward with shaded graphics; it makes no sense to think about replacing techniques when the hardware can't even exploit the full potential of the existing ones.
It will be a mix of both next gen, heavily favoring rasterization. The hybrid approach makes sense for reflections, where it takes a lot of dev time to fake impressive results (see the Spidey puddle gate).
And people think NVs solution is not hybrid because?
What Nvidia does with dedicated hardware, AMD does with a mix of dedicated hardware and shaders.
To quote gofreak's more knowledgeable answer:
At a high level, the biggest difference there seems to be that BVH traversal is handled by the shader. In the nVidia RT core model, the RT core handles traversal autonomously.

The AMD model in that patent tasks the shader with BVH traversal, but node evaluation with intersection tests is offloaded to fixed function hardware. After a node evaluation it returns the result to the shader and the shader must tell it what to do next.

On a lower level, the AMD hardware is suggested to be integrated tightly with texture units. I'm not sure how that compares to the RT core - in logical diagrams they live beside texture units, so I'm guessing the RT cores probably also piggyback on texture units for memory access, but it's not totally clear.

What that would boil down to vs nVidia's approach is:

Probably smaller die cost.
Probably lower performance multiplier for ray tracing (at least in apples to apples cases).
Greater programmer control over traversal. Under this model the programmer could use whatever bvh type is preferred, whereas I guess it is fixed in the RT core model. And some bvh types might suit some scenes better than others. This was one complaint I saw a couple of developers on twitter voice around the RTX model, that it was a total black box, with zero control over the data structures in use.

What it means for absolute performance vs no acceleration is anyone's guess - but presumably it's some degree better, and hopefully enough of an improvement to unlock it as feasibly useful in realtime/games.

Of course, as always with patents, they're only an insight into what a company was working on at some point - an actual product might turn out to be completely different. This might not be exactly what AMD's RT solution for RDNA is, but it's a possible one.
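To make the "greater programmer control over traversal" point above concrete, here is a toy sketch (my own framing, not from the patent): when the shader owns the loop, the same node-evaluation unit can serve different ray types, e.g. a shadow ray can stop at the first hit while a primary ray keeps searching for the closest one. An autonomous black box bakes one policy in.

```python
# Toy illustration: shader-owned traversal lets the program pick the stop
# policy. evaluate() stands in for the fixed-function node test; this toy
# skips real geometry and just returns children or a leaf hit distance.

def evaluate(node):
    # Inner node: list of children to visit. Leaf: hit distance t (or None).
    return node.get("children") or node.get("t")

def traverse(root, any_hit=False):
    stack, closest = [root], None
    while stack:
        result = evaluate(stack.pop())
        if isinstance(result, list):
            stack.extend(result)                   # inner node: keep walking
        elif result is not None:
            if any_hit:                            # shadow ray: first hit wins
                return result
            if closest is None or result < closest:
                closest = result                   # primary ray: track nearest
    return closest
```

The `any_hit` flag is the kind of decision that lives in the shader under the hybrid model but is fixed inside the unit under a fully autonomous one.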
 
Last edited:

thelastword

Banned
Whichever console gets ray tracing right will trounce the other. If AMD has found a good solution, we could all benefit, and it would be a huge leap forward for next gen and next-gen games. Exciting times ahead, eh?
Well, I'll keep my money staked on the company that has devoted more years to researching ray tracing, and that has recognized that more R&D is needed to make the feature viable to all...Pushing the proprietary-hardware angle, and even limiting devs to that without giving them any flexibility, is surely problematic...It's the reason why AMD has maintained its direction on compute and GPGPU...It's the way forward to address many of our bottlenecks in GPU rendering, eliminate idle time/cycles on the GPU, and maximize the hardware for all tasks it can muster, for more efficiency and higher performance...

The word is performance, and also maintaining current standards...How do you do that? AMD's suggestion/patent here is one way...We cannot butcher our resolution and fps gains on the coattails of a newly implemented graphical feature...Asking me to sacrifice all the rez and perf for ray tracing is a no-no, especially when GPUs have become exponentially more expensive...Dedicate more time and money to working out a solution that can give us ray tracing without taking us two steps back, literally...


Remember, GPUOpen and Radeon Rays have been a thing since before 2016; AMD published a video in 2016...Even then, and especially now, there's a concerted effort by some of the brightest minds in our industry to bring this feature to the mainstream...Polyphony Digital, Naughty Dog, Santa Monica, Crytek, DICE, etc...All these guys are currently onboard contributing to Radeon Rays, I mean where else are they going to invest, next gen consoles will be all AMD hardware...Nvidia's approach is just too preliminary for the price being asked imo...Just like AMD broke the boundaries of limited core counts on CPUs, they look poised to break the boundaries of limited cores on GPUs; there's no way you can get all the GPU processing power you need for ray tracing by chasing monolithic dies, you need to break that barrier with Infinity Fabric or multi-core GPUs...It's the only way to maintain current 4K rez standards and maintain or improve framerates whilst also trying to take such an important GPU feature to the masses...You must not just do it, you have to do it well...


There is still plenty of room to offer a leap forward with shaded graphics; it makes no sense to think about replacing techniques when the hardware can't even exploit the full potential of the existing ones.

That's like saying: we never fully utilized or maxed out PS3 hardware, let's not do PS4 or PS5...Or: we never fully utilized the 4-core i7 6700K, let's never introduce 8- or 16-core chips and leverage much more CPU performance across a multitude of cores, when we just have to change our programming a bit to spread our processing over more cores to realize much more performance with much less stuttering, etc...

The truth is, in your example, and I agree with that part, shaded graphics still has lots of room for improvement...Yet voxel technology is just as viable a technology to pursue, as we see in Dreams and Resogun; 3D Dot Game Heroes was just as instrumental in that charge on a PS3 (wonderful game btw)...

In truth, there is always room for improvement in any technology; rasterization, anti-aliasing, filtering? Don't you think that eventually we will get something better than 16x Anisotropic Filtering? It's really about where the focus is: who decides to tackle a problem and do some research, or rather, who decides to improve a feature people deem as standard...That's how technology works...

Don't worry. If next-gen consoles support it (which is why it's in AMD's best interest that both are RDNA2), devs will optimize around AMD hardware, similar to how GCN had the edge over Kepler.

@gofreak
On a lower level, the AMD hardware is suggested to be integrated tightly with texture units. I'm not sure how that compares to the RT core - in logical diagrams they live beside texture units, so I'm guessing the RT cores probably also piggyback on texture units for memory access, but it's not totally clear.
Yes, next gen consoles will determine the technology used by most devs in developing their games...AMD did a genius thing by developing great APU technology and grabbing the console market, and now they are combining desktop-class CPU performance, with significantly more cores, and high-end GPU performance on consoles...(My take is that one of the consoles will have an IF'd GPU, most probably PS5) on 7nm EUV...

In any case, the point is that AMD will also gain lots of PC market share by putting such high-end and evolved technology in consoles at that price point ($400-500)...The developers devving for consoles will in essence also be developing for PC, but since it's AMD hardware, games will run better on AMD-decked PCs over the competition just the same...So if DICE is creating a 128-man Battlefield on an 8-core/16-thread console CPU (that's a Ryzen 3000 CPU), you can be sure that such CPU technology will finally be put through its paces...Then what happens? The guy sporting a 3700, 3800, 3900-class AMD CPU with a Navi GPU wins, because that's the technology most devs will be focused on and using, including developing ray-tracing solutions for said hardware too...This is how AMD has strategized to take over the PC gaming market: not by stagnating technology at a high price, but by giving consoles priced at $400-500 very good kit, where the hardware in consoles is also synonymous with PC hardware as you scale up in price and perf...


As for what gofreak said, yes...Take this example: you realize that in order to get any type of decent performance on an RTX 2060 for ray tracing, you have to butcher texture resolution, lower graphical settings, and only use RTX on low...Now, not all games will offer a low option like BFV, so what happens then?...NV's solution is just not ideal...Look at upcoming games like Control; that is going to butcher these RTX cards something fierce...Remedy does not play with their lighting tech...These guys have no issue rendering games at 540p to push technology, see Alan Wake on 360 or Quantum Break on XBONE...I shudder at the resolutions you will play Control at on these RTX cards...
 

lukilladog

Member
That's like saying: we never fully utilized or maxed out PS3 hardware, let's not do PS4 or PS5...Or: we never fully utilized the 4-core i7 6700K, let's never introduce 8- or 16-core chips and leverage much more CPU performance across a multitude of cores, when we just have to change our programming a bit to spread our processing over more cores to realize much more performance with much less stuttering, etc...

The truth is, in your example, and I agree with that part, shaded graphics still has lots of room for improvement...Yet voxel technology is just as viable a technology to pursue, as we see in Dreams and Resogun; 3D Dot Game Heroes was just as instrumental in that charge on a PS3 (wonderful game btw)...

In truth, there is always room for improvement in any technology; rasterization, anti-aliasing, filtering? Don't you think that eventually we will get something better than 16x Anisotropic Filtering? It's really about where the focus is: who decides to tackle a problem and do some research, or rather, who decides to improve a feature people deem as standard...That's how technology works...

PS3 was the radical technology, not the other way around; the 8/16-core processors are a linear evolution, since devs were making games with dozens of threads long before that. And I'm not sure what you are trying to say about technology, but there are not many things more utilitarian than tech; if it's not practical or falls into the realm of diminishing returns, it gets put aside... it happened to ray tracing for decades in games, and some reflections here and there are not gonna change much.

P.S. There is no need for higher than 16x AF; diminishing returns, and I'm not sure it's even possible.
 
Last edited:

pawel86ck

Banned
I'm not so sure if hybrid RT is a good idea, because if developers are forced to use shaders for RT calculations, then shader performance will be hammered as a result. On the other hand, RT cores offer very little performance penalty, as long as the RT cores aren't a big bottleneck.



Of course that's 1080p, but just imagine, guys, what will happen in the future when RT cores will be able to deliver 4K 60fps (instead of 1080p 60fps); then developers will be able to deliver RT without a performance penalty🙂.
 

llien

Member
I'm not so sure if hybrid RT is a good idea, because if developers are forced to use shaders for RT calculations, then shader performance will be hammered as a result. On the other hand, RT cores offer very little performance penalty, as long as the RT cores aren't a big bottleneck.
That's not how it would work.
Instead of big RT cores that do more, you'd get simpler RT cores + more shaders and, as a bonus, flexibility for developers to use whichever BVH structure they want.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
I'm not so sure if hybrid RT is a good idea, because if developers are forced to use shaders for RT calculations, then shader performance will be hammered as a result. On the other hand, RT cores offer very little performance penalty, as long as the RT cores aren't a big bottleneck.



Of course that's 1080p, but just imagine, guys, what will happen in the future when RT cores will be able to deliver 4K 60fps (instead of 1080p 60fps); then developers will be able to deliver RT without a performance penalty🙂.

That would be for full fixed-function HW RT vs fully software-based RT. Like unified shaders vs separate VS and PS, I think the way forward will be hybrid RT, if AMD can do it right, as you gain a lot of performance without sacrificing too much flexibility. Especially good for a console environment.
 
Last edited:
That's not how it would work.
Instead of big RT cores that do more, you'd get simpler RT cores + more shaders and, as a bonus, flexibility for developers to use whichever BVH structure they want.
Indeed, depending on how it is designed, perhaps even most of the chip could handle ray tracing through combined fixed hardware and shaders, which could theoretically yield higher ray-tracing performance than a smaller fully fixed-hardware solution.
 
Last edited:

thelastword

Banned
I remember when hybrid/shader RT was poo poo'd by folks back 2 weeks ago. Is it cool now?
What you fail to realize is that no one is shitting on current RTX just for shits and giggles, but because the performance and rez hit is just too much for the asking price... There must be another way, where rez and frames are not butchered as much...

So what we are looking for are better solutions. People keep saying that hardware RT is important, and no one has denied that; yet the problem is, it is not the entire solution...Nvidia's RTX cards have a hardware solution, but performance and rez are still massacred whilst the solution is still hybrid regardless. So at this point, hardware-only RT is not the way to go: our GPUs are not powerful enough, and moreover, the RT hardware on NV cards is not powerful enough...

The only way we can get the power we need for RT in real time is to go the multi-GPU route...which Nvidia is not ready for...For hybrid RT to work well, a combo of accessible software algorithms + hardware is key...As RT evolves, even from hybrid, when we attempt to ray trace all aspects of the graphics pipeline, not just reflections or shadows or lighting in isolation as is done now, you will need Infinity Fabric GPUs to accomplish such feats with aplomb...
 

Ascend

Member
I'll give you GameWorks, since it's exclusive, but tessellation is a hardware feature on both cards.
HairWorks, a main GameWorks 'feature', is based purely on tessellation...

Remember, GPUOpen and Radeon Rays have been a thing since before 2016; AMD published a video in 2016...Even then, and especially now, there's a concerted effort by some of the brightest minds in our industry to bring this feature to the mainstream...Polyphony Digital, Naughty Dog, Santa Monica, Crytek, DICE, etc...All these guys are currently onboard contributing to Radeon Rays, I mean where else are they going to invest, next gen consoles will be all AMD hardware...Nvidia's approach is just too preliminary for the price being asked imo...Just like AMD broke the boundaries of limited core counts on CPUs, they look poised to break the boundaries of limited cores on GPUs; there's no way you can get all the GPU processing power you need for ray tracing by chasing monolithic dies, you need to break that barrier with Infinity Fabric or multi-core GPUs...It's the only way to maintain current 4K rez standards and maintain or improve framerates whilst also trying to take such an important GPU feature to the masses...You must not just do it, you have to do it well...
That's interesting... A separate 'chiplet' specifically for RT? Hm.....
 

pawel86ck

Banned
Indeed, depending on how it is designed, perhaps even most of the chip could handle ray tracing through combined fixed hardware and shaders, which could theoretically yield higher ray-tracing performance than a smaller fully fixed-hardware solution.
By doing that, developers would sacrifice TFLOPS on ray tracing alone. Do you want to see PS4 graphics fidelity on PS5, just with ray tracing?
 
Last edited:

Soltype

Member
HairWorks, a main GameWorks 'feature', is based purely on tessellation...


That's interesting... A separate 'chiplet' specifically for RT? Hm.....
HairWorks is exclusive though, so it doesn't matter. I'm talking about regular open-source tessellation, like Crysis 2. Y'all should be mad at AMD for not having better tessellation support. In the games where it's available to both companies, they still lag behind.
 
Last edited:

SonGoku

Member
By doing that, developers would sacrifice TFLOPS on ray tracing alone. Do you want to see PS4 graphics fidelity on PS5, just with ray tracing?
The hardware pipeline will be optimized for efficient RT, and devs won't go crazy with it; they'll use it sparingly, for reflections and such. Maybe there'll even be effects where RT is more efficient at producing the same result.
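A sketch of that "use it sparingly" idea (all names and numbers invented for illustration): rasterize the frame as usual, then spend reflection rays only on pixels whose material is shiny enough to need them.

```python
# Hybrid frame sketch: direct lighting comes from the raster pass; only
# sufficiently shiny pixels pay for a traced reflection ray. The trace()
# callback stands in for whatever the RT hardware/shader path returns.

def shade(gbuffer, trace, cutoff=0.5):
    frame = []
    for px in gbuffer:                       # one entry per rasterized pixel
        color = px["direct"]                 # rasterized direct lighting
        if px["shininess"] >= cutoff:        # the ray budget is spent here only
            color += px["shininess"] * trace(px["reflect_dir"])
        frame.append(color)
    return frame
```

The win is that matte pixels cost nothing extra, so the traced workload scales with how reflective the scene actually is rather than with resolution alone.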
 

Panajev2001a

GAF's Pleasant Genius
By doing that, developers would sacrifice TFLOPS on ray tracing alone. Do you want to see PS4 graphics fidelity on PS5, just with ray tracing?

4K, and maybe even 60 FPS, with much higher resolution textures, improved geometric detail, and ray-traced reflections, shadows, and lighting for convincing dynamic GI... yes please ;). The future still seems to be unified shaders (with RT enhancements) rather than fixed-function HW... just IMHO.
 

Ascend

Member
HairWorks is exclusive though, so it doesn't matter. I'm talking about regular open-source tessellation, like Crysis 2. Y'all should be mad at AMD for not having better tessellation support. In the games where it's available to both companies, they still lag behind.
Let's just leave it where it's at. Because you're either not getting it, or you're being deliberately obtuse, and it's doing nothing more than derailing the thread.
 

lukilladog

Member
...It will be a mix of both next gen, heavily favoring rasterization. The hybrid approach makes sense for reflections, where it takes a lot of dev time to fake impressive results (see the Spidey puddle gate).

I'd rather see shading continue its evolution unrestrained, because eventually it will deliver what we can see in the videos I posted above (which don't use RT); RT would hold that back badly. I get that tech has to progress, but when this tech isn't cheaper, when it isn't faster, and all it has to offer are diminishing returns... it becomes a waste of resources 🙉
 

Soltype

Member
Let's just leave it where it's at. Because you're either not getting it, or you're being deliberately obtuse, and it's doing nothing more than derailing the thread.
Maybe I'm not seeing where you're coming from, but we'll leave it at that.
 

thelastword

Banned
PS5 Uses AMD New Hybrid Ray Tracing



Also, perfect time for Sony to pick up Remedy after they claimed their Alan Wake IP....
 

LordOfChaos

Member

Mass Shift

Member
PS5 Uses AMD New Hybrid Ray Tracing



Also, perfect time for Sony to pick up Remedy after they claimed their Alan Wake IP....

Again, Remedy already owned the IP. MS relinquished the publishing rights of the game to them.

As for Sony picking up Remedy: look, I'm not bashing the franchise or Quantum Break. They weren't bad, but they weren't great games either. Certainly not on the same level as TLOU. AW was a fuller experience, but QB should never have sold for more than $30. You could get through the game in about 7hrs if it weren't for the 3hrs of narrative cutscenes that you don't participate in, unlike Detroit, for example, where you at least had dialogue choices.

Anyway, I know there's already a thread for this so I won't go on. Just play these games on PC before you start asking Sony to acquire this studio.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
That's not how it would work.
Instead of big RT cores that do more, you'd get simpler RT cores + more shaders and, as a bonus, flexibility for developers to use whichever BVH structure they want.
Developers will not want 'whichever' BVH -- there are only so many BVH-building approaches that run well on GPUs (state of the art). IOW, RTX building the accelerator structures as a 'black box' function (not necessarily in hw, but in a fully turn-key manner) was done on purpose. What AMD's approach brings to the table, though, is the use of 'special-case' accelerators that have little or nothing to do with BVHs -- like grids or octrees, which you currently have absolutely no way of using with RTX.
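To make the grid example concrete: a uniform grid needs no BVH-style traversal at all; a shader that owns the loop could march it with simple DDA stepping. A 2D toy sketch in the style of Amanatides & Woo grid traversal (my own simplification, not tied to any real API):

```python
# Walk the cells of a uniform 2D grid along a ray, one cell per step.
# A traversal-owning shader could run a loop like this; a BVH-only black
# box exposes no way to plug in this kind of accelerator.

def grid_march(origin, direction, cells, size=1.0):
    """Return the (ix, iy) cells a 2D ray passes through, `cells` in total."""
    ix, iy = int(origin[0] // size), int(origin[1] // size)
    step = [1 if d > 0 else -1 for d in direction]

    def next_t(o, d, i, s):
        # Ray parameter t at the next cell boundary along one axis.
        edge = (i + (s > 0)) * size
        return (edge - o) / d if d != 0 else float("inf")

    tx = next_t(origin[0], direction[0], ix, step[0])
    ty = next_t(origin[1], direction[1], iy, step[1])
    dtx = size / abs(direction[0]) if direction[0] else float("inf")
    dty = size / abs(direction[1]) if direction[1] else float("inf")

    visited = [(ix, iy)]
    for _ in range(cells - 1):
        if tx < ty:                      # vertical boundary is closer
            ix += step[0]; tx += dtx
        else:                            # horizontal boundary is closer
            iy += step[1]; ty += dty
        visited.append((ix, iy))
    return visited
```

Grids suit densely, evenly populated scenes (particles, voxels) where a BVH's hierarchy buys little, which is exactly the kind of special case blu is describing.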
 

thelastword

Banned
Again, Remedy already owned the IP. MS relinquished the publishing rights of the game to them.

As for Sony picking up Remedy: look, I'm not bashing the franchise or Quantum Break. They weren't bad, but they weren't great games either. Certainly not on the same level as TLOU. AW was a fuller experience, but QB should never have sold for more than $30. You could get through the game in about 7hrs if it weren't for the 3hrs of narrative cutscenes that you don't participate in, unlike Detroit, for example, where you at least had dialogue choices.

Anyway, I know there's already a thread for this so I won't go on. Just play these games on PC before you start asking Sony to acquire this studio.
Hey Mass, I have these games on PC and I agree that they are not the best games ever, but they are pretty decent...I personally like TPS games...We must remember though that Remedy also did Max Payne 1+2...So in the right hands, and under good direction from Sony, I think Sony can transform these guys into a 90% studio...

Having said that, there are other studios I would prefer Sony to purchase over Remedy...…..Personally, I'd love for them to go after Konami, Capcom, KojiPRO and Insomniac, but if they're going after smaller studios I think Bluepoint, Remedy, RAD etc are fair game.....
 

Mass Shift

Member
Hey Mass, I have these games on PC and I agree that they are not the best games ever, but they are pretty decent...I personally like TPS games...We must remember though that Remedy also did Max Payne 1+2...So in the right hands, and under good direction from Sony, I think Sony can transform these guys into a 90% studio...

They would have to in order to fit in with the studios at Sony. QB and AW had ridiculously long development cycles for what we ended up receiving from Remedy: 6 years for QB, 5 years for AW.

All that time for what amounted to 7-8hr player campaigns. The expansion helped AW to a more complete experience. And environments that were nowhere near as engaging as they were beautifully rendered.

MS has been on a studio buying binge, but they never once considered Remedy. I hope Sony recognizes that.
 