
Euclideon "Unlimited Detail" Tech Megathread - Strip Down the Hyperbole

Ever since the most recent news, I've been thinking this thread might be necessary - firstly, to dispel some of the hyperbole and misconceptions surrounding this whole thing and straighten out the facts somewhat, and secondly, to collate news and information about Unlimited Detail in one place. If some mod feels that's unnecessary, feel free to lock, I suppose. I'm gonna keep most of the videos and news updates at the bottom of the OP, unless I need certain links to demonstrate certain things. I hope this thread is at least somewhat informative.



Euclideon is a graphics middleware company based in Brisbane, Australia, founded by Bruce Dell and specializing in point cloud/voxel data. It's also infamous for overhyping the capabilities of its main project, "Unlimited Detail", right down to the name. They are terrible at properly selling the idea, which is unfortunate, because, provided it works properly and is actually game-ready, it's potentially groundbreaking. So I'll try and 'sell' it for them, in a much less hyperbolic manner.
You're welcome, you wankers. :p

Disclaimer: I'm not a graphics expert or engineer or anything of the sort, just a university student studying games design, so my explanation may be inaccurate on the technical side. You're better off asking someone else for a more technical explanation if you want one. I'm pretty sure all this is about right, but this is just to be on the safe side.

What is "Unlimited Detail"?


It is, according to its developers, a highly sophisticated voxel/point cloud rendering system that can render voxel environments far, far more efficiently than polygon engines.

Here are the basics of what Unlimited Detail is, in visual form:

ground_unlimited.jpg
unlimited-detail-dirt.jpg
18j1pg9t616ppjpg.jpg


Compared with a relatively recent game screenshot:

ground_polygon.jpg


What you're seeing in the lower image is slightly outdated, but the improvements to environment geometry aren't really substantial in more recent titles. Take note of the ground - nearly everything is flat, with the occasional small detail object, and the grass is literally billboards: flat textured objects designed to create the illusion of grass without adding much to the polygon count. Even the player model and weapon have visible polygon limits if you know where to look, though they're much less obvious. By contrast, the upper screenshots have thousands upon thousands of highly detailed objects, from the grass to the individual bits of dirt in the ground. Composition and colour palette aside, the actual detail obviously far exceeds the lower screenshot. This exact demo has been demonstrated on-camera in playable form.

How does it work? "Unlimited Detail", as far as we know (putting aside speculation about exactly how the voxels are structured), utilizes what Bruce Dell calls an efficient 'search engine' that, every frame, finds one visible voxel for each pixel on the screen, ignores the rest, and constructs the image accordingly. It has also been claimed and shown that the technology can run at 30FPS in software on a pre-2012 non-gaming laptop. That is incredibly efficient, and, provided it works as stated, capable of high resolutions and an extreme amount of detail with very little processing required, relatively speaking by today's standards. It has also been stated and shown that UD voxels and polygons can coexist in the same scene.
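
Since Euclideon has never published the algorithm, here's a deliberately dumb toy sketch (in Python) of the one-lookup-per-pixel idea as I understand it. The dense grid and fixed-step ray march are my own stand-ins - the real thing is widely assumed to use a sparse octree - but the point survives: the render loop does work per pixel, not per voxel.

Code:
import numpy as np

# Toy guess at the "search engine": for each pixel, march a ray through a
# voxel grid and keep only the FIRST voxel hit. NOT Euclideon's code.
N = 64
grid = np.zeros((N, N, N), dtype=bool)
grid[:, :8, :] = True                       # a "floor" slab of voxels
grid[20:28, 8:16, 20:28] = True             # a box sitting on the floor

def first_voxel(origin, direction, max_steps=400):
    """March along the ray; return the first occupied cell, else None."""
    pos = np.array(origin, dtype=float)
    step = np.array(direction, dtype=float)
    step /= np.linalg.norm(step)
    for _ in range(max_steps):
        i, j, k = pos.astype(int)
        if 0 <= i < N and 0 <= j < N and 0 <= k < N and grid[i, j, k]:
            return (i, j, k)                # one voxel answers this pixel
        pos += step * 0.5                   # coarse fixed-step march
    return None

# One lookup per pixel: cost scales with the number of PIXELS on screen,
# not with how many voxels are in the scene.
W, H = 32, 24
image = [[first_voxel((x, N - 1, z), (0.2, -1.0, 0.3)) is not None
          for x in np.linspace(1, N - 2, W)]
         for z in np.linspace(1, N - 2, H)]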

Depending on your view, the name is slightly disingenuous - while there are still theoretical hardware and storage limits (mainly RAM and storage space), and the amount displayed is limited by the monitor resolution, the amount of voxel data in a scene at one time has, in theory, no bearing on how efficient the actual renderer is: the renderer will always collect and display only the voxels that can possibly be visible on-screen each frame, no more, no less. Once again, we lack actual knowledge of the limits of the tech, but according to interviews with geospatial data companies who have actually used the Geoverse offshoot, they have tried and failed to make the engine fail or even falter.

Speaking of which, Euclideon released a variant of the technology, called Geoverse, as a data viewer for geospatial LiDAR data - in other words, entire environments scanned in 3D.

Relatedly, one of the main features of the technology is the ability for developers to scan in objects from the real world (quality depends on the scanning tech, mind you; a focused scan of a single object is going to look a lot better than a total environment scan) without having to degrade the end result to fit a traditional game engine. It has also been stated that there are pipelines for traditional modelling workflows, enabling the use of extremely high-quality models such as the stuff you see made in ZBrush. Scenes can be constructed like any regular game world, and entire environments can be scanned in from the real world, albeit in debatable quality due to the limits of LiDAR scanning.

In conclusion: Is the technology that amazingly revolutionary? Depending on how you view it, no, it really isn't - voxel tech has been around for a long time in various forms. However, much like what Oculus is doing for VR, "Unlimited Detail" could very well make pure voxel rendering practical for high-detail game worlds. If it works well enough, it could also potentially render polygon-based engines effectively obsolete. Potentially. The jury is still very much out on that.

Fun fact: Bruce Dell began developing the technology when he mistook Donkey Kong Country's pre-rendered CGI graphics for real-time visuals - basically, his reaction was "challenge accepted". He even started on his trusty Amiga. It was, to my understanding, mainly a hobby project until he was able to form a company and start developing it more seriously.

What's still missing?


Aside from what was noted above, there are other important aspects that are still missing from the picture. The first three of these have been previously noted as the most important factors by Euclideon in terms of making the technology 'game-ready'. (If I've missed something, let me know.)


  • Animation: The above-linked 2011 interview with Bruce Dell includes rudimentary animation from previous iterations of the technology. There's also an animation test video from about four years back. However, we have yet to see proper animation from the current iterations of the tech.
  • Dynamic Lighting: Actually demonstrated in a mini-demo at Gamescom 2011, which also showed how UD voxels can coexist with polygons.
  • Collisions and Physics: Unknown. However, it could theoretically be achieved the 'traditional' way, with simplified collision proxies (see the sketch after this list), though one can imagine the aim is to create a system that works better for voxel-based environments.
  • Data storage and RAM: Somewhat unknown, though Bruce Dell has stated that it is a major focus. Considering Geoverse was designed to greatly optimize LiDAR data, who knows what the UD format might be like.
  • Water: The 2011 demo doesn't actually have water - that's just an illusion caused by the entire scene being mirrored and tinted (which is still a big deal; in other engines, that trick would be considered completely insane and wasteful). That being said, it's not difficult to imagine water rendering in UD anyway, especially if they get animation working in general.
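
To make the 'traditional way' mentioned in the collisions bullet concrete, here's a minimal sketch of the idea (entirely my own toy, nothing from Euclideon): collapse the voxel object into a crude axis-aligned box and let the physics step test only boxes, never the voxel data.

Code:
import numpy as np

# Toy version of the 'traditional' route: render the detailed voxels, but
# hand physics a crude proxy shape. All names here are my own assumptions.
def voxel_aabb(occupied):
    """Collapse a set of occupied voxel coords into one axis-aligned box."""
    pts = np.array(sorted(occupied))
    return pts.min(axis=0), pts.max(axis=0) + 1     # [lo, hi) bounds

def boxes_overlap(a, b):
    (alo, ahi), (blo, bhi) = a, b
    return bool(np.all(alo < bhi) and np.all(blo < ahi))

# The physics step never touches voxel data, only the cheap proxies:
statue = voxel_aabb({(10, 0, 10), (10, 1, 10), (11, 0, 10)})
player = (np.array([9, 0, 9]), np.array([11, 2, 11]))
print(boxes_overlap(statue, player))                # True -> resolve as usual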

Until we know that these factors have been adequately addressed, we don't know if the technology is ready for gaming. However, in theory, what we know so far could still be used for static environments mixed with polygon objects. But, once again, it's "wait and see" at this point.

Frequently Asked Questions


Q: Isn't this a scam/hoax?

A: At this point? No, not really. Euclideon have put out an actual product (Geoverse) that clients are using, and responding positively to. Now, is the tech actually 'game-ready', and will they ever release game-ready middleware? The jury is still out on that one, but Euclideon are reportedly developing two game projects to demonstrate the technology.

Q: But Notch said it was! And John Carmack said it's way too early for such tech to work!

A: Even experts can be wrong (and it can be argued that Notch isn't really an 'expert'). Considering the hyperbolic sales pitch and how 'too good to be true' it all sounds, I don't really blame them, though. I'm not going to get into specifics here, however, and it's too early to call a verdict.

Q: But Euclideon are lying and over-exaggerating how awesome "Unlimited Detail" is!

A: Ignore the sales pitch if you can help it - as previously said, their sales pitches for the tech are cringeworthy a lot of the time. It's not so much 'lying' as overhyping and hyperbole. Strip away all that and focus on the facts, and there's still a lot to like.

Q: I've seen the videos with the real-world environment scans, if it's supposedly 'unlimited', why does it look kinda awful?

A: As previously stated, it's to do with the data, not the renderer. The scans are the end result of moving scanners that try to collect as much data as possible in an environment, and as such are only capable of collecting so much. Such scanners also usually remove or mess up moving objects, and especially small details - you can see how this affects plants and other objects moved by wind. In the 2011 demo, the laser-scanned objects look much better and cleaner due to focused, controlled scanning, and there is much more precise scanning tech out there these days. Even if the data has flaws, the sheer amount of geometry still eclipses modern engines.

Q: I've heard from somewhere that it's cloud-based.

A: No, it isn't. That's just one way Geoverse can load in data from external storage.

Q: The 2011 demo looks awfully repetitive...

A: That's because it was cobbled together in a few weeks ahead of Gamescom. The repetitiveness is irrelevant anyway - it's still rendering a ton of detail in the actual scene. Oh, and the 'water' is actually the entire scene mirrored and recoloured, meaning that's double the potential geometry to render.

Q: But it's still too good to be true!

A: Perhaps. While it's sensible to keep one's expectations in check, sometimes new thinking can overturn common wisdom. Keep the hype low, and wait until we have something playable to judge. And even then, it might improve over time, so who knows?

Q: You're only giving them the benefit of the doubt because you're Australian!

A: What? No. That's silly.

Q: Okay, let's assume that it does work as you've described. What kind of stuff would it enable, aside from catapulting real-time 3D graphics forward at least a couple of console gens?

A: Well, I suppose I can make a list... (Note: pretty much speculation on my part.)


  • No need for flat geometry. You could probably even convert bump mapping into actual, proper geometry (see the sketch after this list). And, for that matter, no more visible polygon vertices.
  • Proper grass and flora that isn't a bunch of flat textures that, depending on the game, may or may not constantly face towards the camera.
  • New potential avenues of content creation, and potentially cheaper. Real-world objects can be scanned with no need to compromise on quality.
  • Models don't need to be remade for more advanced hardware if they're of high enough quality. This is especially true of 'realistic' models.
  • Level-of-detail (LOD) models are essentially obsolete.
  • We could basically have games on the level of CGI. Actually, thinking about it, it could also enable real-time non-interactive animations, even if mainly for the novelty of watching them in 60 FPS.
  • Oculus Rift-compatible UD games would be much, much easier to optimize for 120 FPS.
  • 4K, 8K and 3D-view games without the need for high-end PCs.
  • The efficient rendering would leave a lot of room for other processing-intensive features, such as lighting, physics, and fluid dynamics.
  • Destructible geometry with everything is much, much easier, and much more dynamic.
  • And other stuff I'm probably forgetting.
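
On the first bullet, a toy sketch of what 'bump maps become real geometry' could mean: a bump map fakes relief in the shader, but a voxel renderer could just be fed one real point per texel, displaced along the normal. All names and values below are mine, purely illustrative, and assume a flat surface:

Code:
import numpy as np

# Toy sketch: promote a bump/height map to actual displaced geometry.
# A random array stands in for the bump map; values are purely illustrative.
SIZE, RELIEF = 256, 4.0
height = np.random.default_rng(1).random((SIZE, SIZE))

u, v = np.meshgrid(np.arange(SIZE), np.arange(SIZE))
points = np.column_stack([
    u.ravel(),                    # surface U
    v.ravel(),                    # surface V
    (height * RELIEF).ravel(),    # displacement along the (flat) normal
])
# 65,536 real positions instead of a per-pixel lighting trick; a voxel
# renderer could ingest these directly, no triangulation required.
print(points.shape)               # (65536, 3)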

Timeline


1994: Bruce Dell starts working on the tech after mistaking Donkey Kong Country's pre-rendered CGI for real-time graphics, using his Amiga. The result is colourful, unrealistic and a bit crude/primitive, but still shockingly detailed real-time 3D for the time.
2010: Dell founds Euclideon as a company and releases a few videos on YouTube showing a much earlier version of the tech with no colours, including a point cloud animation test. He then disappears into the woodwork for about a year.

2011:
2012: They spend the tail end of the year mostly teasing Geoverse. Oh, and a video of a random laser scan test.

2013:
2014:


Comment on the latest news: I have no plans to predict when these guys are going to pull something out next, or what - I've tried and failed before. But I have to wonder what kind of game projects they're working on. I know Dell has previously stated that they were working on another playable demonstration with the help of ex-THQ artists; perhaps that's one of the projects. Who knows. But apparently they're fully focused on gaming content now that they've finished Geoverse.

I'm personally cautiously optimistic, hoping for the best, bracing for the worst. I really want to see it happen and working, but I'm also worried about being disappointed. Oh, well, that's what happens when you're trying not to get too excited about a seemingly far-fetched but amazing-looking project.
 
Years ago something like this popped up in Edge a few times, for some Russian dictator sim or other, I forget the name. It said it was so infinitely detailed you could zoom into any building down to a flower pot, then the flower, then the petals and the stamen.

It sounded like a load of rubbish back then.
 

FirewalkR

Member
Years ago something like this popped up in Edge a few times, for some Russian dictator sim or other, I forget the name. It said it was so infinitely detailed you could zoom into any building down to a flower pot, then the flower, then the petals and the stamen.

It sounded like a load of rubbish back then.

Republic: The Revolution by Demis Hassabis's (ex-Bullfrog/Lionhead) Elixir Studios.
 
I have a feeling this is going to look really bad when coupled with physics and animation; physics might not even be possible.

I'd love to be wrong, though. I can see this tech being amazing for horror games: get a crew to go scan an abandoned sanatorium or something.
 

Teremap

Banned
Yeah, animation and physics are the real problems this tech must contend with.

I should note that this strongly resembles real-time path-tracing engines like Brigade, only without the complex lighting - and Brigade already has animation and physics working in-engine. From that perspective, it's really not that impressive.
 

bobbytkc

ADD New Gen Gamer
I have a feeling this is going to look really bad when coupled with physics and animation; physics might not even be possible.

I'd love to be wrong, though. I can see this tech being amazing for horror games: get a crew to go scan an abandoned sanatorium or something.

How about just using the tech for static things like the floor or buildings? Is that possible? What are the drawbacks?
 
I have a feeling this is going to look really bad when coupled with physics and animation; physics might not even be possible.

I'd love to be wrong, though. I can see this tech being amazing for horror games: get a crew to go scan an abandoned sanatorium or something.

Animation is obviously a major problem, but physics could easily be done the 'traditional' way if needed, using simplistic polygonal 'collision models'. Most polygon physics objects have far simpler collision models than their visible meshes, and player collisions are often just basic 3D shapes.
 

Lafazar

Member
I feel it's completely pointless to talk about this technology for games at this point. So far they have not shown any of the things listed in the OP that would be necessary to make this technology suitable for games (and I personally doubt they ever will).

Once they do, we'll actually have something to talk about, but at this point it is a complete waste of time.
 

Katori

Member
I have a feeling this is going to look really bad when coupled with physics and animation; physics might not even be possible.
I have an idea of how physics could be done, though maybe I should sell it to Euclideon instead of posting it on GAF (j/k). But it would really be quite simple.
 
I definitely think there's room for new rendering techniques; we've kind of gotten stuck on polygons. Obviously there are challenges they have to tackle, but the more widespread the technology gets, the more people can bang their heads against it until they come up with solutions. (See VR development in the past two years.)

I think that's one of the biggest problems: Euclideon, even with a released product, likes to keep a lot of what they're promising close to their chest. That makes people question whether they've got a legitimately promising direction or not.
 

low-G

Member
They tackled a 'problem' by finding an ideal solution to ONE thing while ignoring all other problems / practicality. Like making a perfect hair renderer that requires 95% of the GPU power. Pointless.

Games probably WILL use tech like this in several decades, along with raytracing and amazing particle physics.
 
I should note that this strongly resembles real-time path-tracing engines like Brigade, only without the complex lighting - and Brigade already has animation and physics working in-engine. From that perspective, it's really not that impressive.

To be fair, Brigade is also running on two Titans and the 3.0 trailer has an absurd amount of image noise, while UD could practically run on a toaster, relatively speaking - not to mention Brigade still uses polygons - so I don't think that comparison really works.
 

low-G

Member
I should note that this strongly resembles real-time path-tracing engines like Brigade, only without the complex lighting - and Brigade already has animation and physics working in-engine. From that perspective, it's really not that impressive.

This is doing a completely different set of things than Brigade, but taking a similar approach of being an impractical yet ideal solution to ONE graphics problem.

Combine this with Brigade smoothly and you'll be well on your way to astoundingly real graphics. Also, you'd probably have to make a lot of mathematical theory breakthroughs along the way.
 

Teremap

Banned
To be fair, Brigade is also running on two Titans and the 3.0 trailer has an absurd amount of image noise, while UD could practically run on a toaster, relatively speaking - not to mention Brigade still uses polygons - so I don't think that comparison really works.
It works insofar as, if you dropped the many, many lighting bounces and replaced the polygons with voxels, Brigade would perform on a similar level (it is an INSANELY fast renderer when you consider how complex the lighting is). Brigade's performance is also independent of the complexity of the scene, much the same way Unlimited Detail is purported to work.
This is doing a completely different set of things than Brigade, but taking a similar approach of being an impractical yet ideal solution to ONE graphics problem.

Combine this with Brigade smoothly and you'll be well on your way to astoundingly real graphics. Also, you'd probably have to make a lot of mathematical theory breakthroughs along the way.
Quite frankly, until UD produces working game footage like Brigade, I'm heavily inclined to think of their route as a technological dead-end.

Real-time pathtracing is merely a matter of power. I imagine once we drop down to, say, 14nm GPUs with stacked DRAM, path-tracing will be a practical technology whereas UD is just a nice thing to think about.
 

Damaniel

Banned
They tackled a 'problem' by finding an ideal solution to ONE thing while ignoring all other problems / practicality. Like making a perfect hair renderer that requires 95% of the GPU power. Pointless.

Games probably WILL use tech like this in several decades, along with raytracing and amazing particle physics.

It's hardly pointless to do the work necessary to prove the technology, even if the result ends up being highly impractical for general use. I suspect that there are huge hurdles to making an engine like this game-ready (and most of them have already been mentioned, but physics and animation are the big ones), but we can't make progress if people choose not to do something just because said something is perceived to be of little value or practicality.

It's obvious that the company has *something*, but I'm not passing judgment until we see an actual game (or at least some kind of game-like demo) confirmed to be running using their tech. And they should also cut back on the hyperbole; people won't take their company seriously if they promise the world (even if they can deliver 95% of it in the final product).
 

Chabbles

Member
Haven't heard about this in a while... It would have been cool if this tech was up to speed and more advanced at this point; it could be a great marriage with VR.
 
Are there any other pseudoscience threads on GAF?

I kid I kid

Still looking for any actual practical game engine application for voxels in the way they use them. It needs quite a bit more, IMO, before it can even be usable for moving tech demos.

ground_polygon.jpg


Also, this screen is kinda like totally BS as a counterpoint.
 
Yes, let's stop the hyperbole. Here are responses straight from them on what we gamers care about. Their answers are either vague or flat-out disappointing. Most people in this thread see the truth of the matter.

http://techreport.com/review/27103/euclideon-preps-voxel-rendering-tech-for-use-in-games

Dynamic Lighting
Euclideon's technology does support dynamic lighting, and Dell claimed the results are better than those from polygon-based games. However, he added that he prefers to preserve original photographic lighting from real-world scans whenever possible. Real-world lighting is "so much higher [quality] than what computers can generate." The same goes for pre-baked lighting from offline renderers. "We've been . . . setting [3ds Max] and Maya to really high lighting settings and then running that through our converter to turn it back into XYZ voxels," Dell noted. The decision to forego dynamic lighting apparently isn't tied to performance constraints, though. "If we had any performance limitations," Dell told me, "we'd probably go to something like CUDA and actually start using the graphics card."

Status of Animation
Both of the upcoming Euclideon-powered games will feature "directly imported graphics from the real world," and they'll be entirely voxel-based, with no polygons even for animated models. Dell told us that animating voxels is "not the hardest thing in the world," but Euclideon's implementation is only about 80% done. That's why we haven't seen it demoed yet. "If I were to put that up today, I think people would look at the 20% that was missing," Dell explained.

Antialiasing
What about antialiasing? Euclideon has been "experimenting" with a new AA technique, but that technique is being kept under wraps because of "patent issues." All Dell would say is that the "one voxel per pixel" formula doesn't mean pixels have to fall "exactly on the pixel grid," and the AA scheme may "make some decisions about where it does want to grab just a few voxels extra in an extremely efficient way and blend them together." Dell also suggested that antialiasing will play a part in improving image quality on lower-end systems that may render scenes at lower resolutions. In any case, high-res screenshots from Euclideon's latest demo still show hard, jagged edges between objects in some places. (See the gallery at the bottom of this page.)
 

Antialias

Member
This is a running gag where I work.

Not so much the tech, which may someday have its place, but just the ridiculous bullshit claims the guy makes in practically every sentence.
 

eot

Banned
If these guys had anything worthwhile you'd see them have some partnerships/customers by now, but years later they're still just posting poorly considered videos on YouTube. OK, the FAQ states they have clients, but they don't show a single supposed client application of their tech (that I saw, anyway).
 
It works insofar as, if you dropped the many, many lighting bounces and replaced the polygons with voxels, Brigade would perform on a similar level (it is an INSANELY fast renderer when you consider how complex the lighting is). Brigade's performance is also independent of the complexity of the scene, much the same way Unlimited Detail is purported to work.
Quite frankly, until UD produces working game footage like Brigade, I'm heavily inclined to think of their route as a technological dead-end.

Real-time pathtracing is merely a matter of power. I imagine once we drop down to, say, 14nm GPUs with stacked DRAM, path-tracing will be a practical technology whereas UD is just a nice thing to think about.

I have to agree. Brigade is far more impressive. I have been following them for a long time. Once Nvidia, AMD, and Intel start making hardware built for path tracing, things will take off for OTOY.

Also, for those who don't know, Voxel Farm (http://voxelfarm.com/vfweb/index.html) is being used in Everquest Next. People are already playing Landmark. This solution converts voxels to polygons at runtime. Ultimately, I think hybrid engines are the way to go.
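
For the curious, the simplest possible form of that runtime voxel-to-polygon conversion looks something like the below - naive exposed-face extraction, my own toy sketch, definitely not Voxel Farm's actual algorithm (theirs produces smooth surfaces):

Code:
import numpy as np

# Toy version of runtime voxel-to-polygon conversion: emit a quad for every
# solid face that touches air. (My own naive sketch, not Voxel Farm's.)
def exposed_faces(grid):
    """Yield (cell, axis, direction) for each solid face exposed to air."""
    dims = grid.shape
    for idx in np.argwhere(grid):
        for axis in range(3):
            for d in (-1, 1):
                nb = idx.copy()
                nb[axis] += d
                outside = not (0 <= nb[axis] < dims[axis])
                if outside or not grid[tuple(nb)]:
                    yield tuple(idx), axis, d       # this face needs a quad

grid = np.zeros((4, 4, 4), dtype=bool)
grid[1:3, 1:3, 1:3] = True                          # a 2x2x2 solid block
print(len(list(exposed_faces(grid))))               # 24 quads = 6 faces x 4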

Also, there are games that are using scanned environments from Photogrammetry and Laser Scanning. They are just converted into polygons. The games are The Vanishing of Ethan Carter (http://ethancartergame.com/) and Get Even (http://www.thefarm51.com/index.php?module=projects&menu=6&id=3).
 
We use voxel based models to import high accuracy human body parts (organs, tissue, bone, etc.) into the CAD tool I support/sell for a living.

Like UD, they offer very, very high resolution, high fidelity models, but as soon as you try to make any changes (transforming location or scale, let alone actual CAD operations that would modify the 'geometry' itself) you get into a terribly challenging implementation problem. Even on high end hardware, getting a consistent, static, one-time transform working reliably on dense voxel objects is EXPENSIVE - you start trying to tackle that in real time for games, and wow, no thanks.
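
To put a rough picture on that (my own toy illustration, not from the poster's tool): rotating a dense voxel model means resampling every cell through the inverse transform - O(n^3) work per edit, with filtering error accumulating each time you do it.

Code:
import numpy as np

# Toy illustration of why transforms on dense voxel data are expensive:
# a rotation must resample EVERY cell through the inverse transform.
def rotate_voxels_z(src, angle):
    """Nearest-neighbour resample of a dense bool grid, rotated about Z."""
    n = src.shape[0]
    c, s = np.cos(-angle), np.sin(-angle)           # inverse rotation
    half = (n - 1) / 2.0
    idx = np.arange(n) - half
    x, y, z = np.meshgrid(idx, idx, idx, indexing="ij")
    sx = np.clip(np.rint(c * x - s * y + half), 0, n - 1).astype(int)
    sy = np.clip(np.rint(s * x + c * y + half), 0, n - 1).astype(int)
    sz = np.clip(np.rint(z + half), 0, n - 1).astype(int)
    return src[sx, sy, sz]                          # touches all n^3 cells

grid = np.zeros((128, 128, 128), dtype=bool)        # tiny by scan standards
grid[40:88, 60:68, 40:88] = True
rotated = rotate_voxels_z(grid, np.pi / 6)          # ~2M resamples here; a
                                                    # real 1024^3 scan is ~1e9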

This tech could be highly useful for 'static' applications like virtual museum tours, or even 'Myst'-style still life games, but I am highly doubtful it will ever see widespread deployment for more common games.
 
I'd love to see a walkaround demo of this with the Oculus Rift. I doubt this'll be useful for actual games, but for making interactive little experiences, I can see its use.
 

Horp

Member
OP says:
Disclaimer: I'm not a graphics expert or engineer or anything of the sort, just a university student studying games design, so my explanation may be inaccurate on the technical side

Why then make a tech thread about this? You realize that there actually are a bunch of actual graphics experts and engineers on this board. Why not keep discussing this in the threads that already exist?

If someone makes a "strip down the hyperbole" thread, one would assume it would be made by an expert on the matter who is actually capable of doing just that.

And no, you can't just cite their own answers about the details of their own engine, since those are hyperbolic themselves, and they have left a bunch of critical questions unanswered for 3 years.
 

Durante

Member
Until we see large-scale animation it's pretty useless for games. And animation is the hard part.

Maybe it's good for walking simulators.

Destructible geometry with everything is much, much easier, and much more dynamic.
That's just totally wrong. Unless by "destruction" you simply mean "removing voxels". What people think of when they think destruction is something like BF3/4, with entire parts of the levels moving around.

Good luck recomputing your rendering-optimized voxel data structures each frame!
 
At this point, all this is going to boil down to 'wait and see'. All I can really do is state the details as I see them, try to stick as closely to fact as possible, and keep the speculation separate from that.

Yes, let's stop the hyperbole. Here are responses straight from them on what we gamers care about. Their answers are either vague or flat-out disappointing. Most people in this thread see the truth of the matter.

http://techreport.com/review/27103/euclideon-preps-voxel-rendering-tech-for-use-in-games

Dynamic Lighting


Status of Animation


Antialiasing

All I see for the first one is 'we've already implemented it, we just prefer to do it a different way for our own projects', and the second two are both 'we're still working on it, but it's coming along fine'. What about that is vague or disappointing?
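
For what it's worth, my reading of the AA answer is plain off-grid supersampling: take a few extra voxel lookups slightly off the pixel centre and blend them. Pure guesswork on my part, since the actual technique is under wraps, but as a sketch:

Code:
import numpy as np

# Guesswork sketch of the AA description: a few extra off-grid voxel
# lookups per pixel, blended together. Their real technique is unpublished.
rng = np.random.default_rng(0)

def shade(u, v):
    """Stand-in for the per-pixel voxel lookup: a hard diagonal edge."""
    return 1.0 if v > 0.7 * u else 0.0

def pixel_colour(px, py, extra_taps=3):
    samples = [shade(px + 0.5, py + 0.5)]           # the on-grid lookup
    for _ in range(extra_taps):                     # "a few voxels extra"
        du, dv = rng.uniform(-0.5, 0.5, size=2)     # off the pixel grid
        samples.append(shade(px + 0.5 + du, py + 0.5 + dv))
    return sum(samples) / len(samples)              # "blend them together"

image = np.array([[pixel_colour(x, y) for x in range(64)] for y in range(64)])
# Edge pixels now take fractional grey values instead of hard jaggies.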

If these guys had anything worthwhile you'd see them have some partnerships / customers by now, but years later they're still just posting poorly considered videos on youtube. Ok the faq states they have clients, but they don't show a single supposed client application of their tech (that I saw anyway).
The aforementioned clients are geospatial companies, and Geoverse is pretty much a real-time data viewer. I imagine there are plenty of uses for visualizing entire real-world environments as 3D models in an accessible and fast manner.
 
At this point, all this is going to boil down to 'wait and see'. All I can really do is state the details as I see them, try to stick as closely to fact as possible, and keep the speculation separate from that.



All I see for the first one is 'we've already implemented it, we just prefer to do it a different way for our own projects', and the second two are both 'we're still working on it, but it's coming along fine'. What about that is vague or disappointing?


The aforementioned clients are geospatial companies, and Geoverse is pretty much a real-time data viewer. I imagine there are plenty of uses for visualizing entire real-world environments as 3D models in an accessible and fast manner.

Just saying that they can support it is not good enough. They have to show it. The fact that they won't even have it in their 2 upcoming games is really disappointing. When you are making claims like they are, you need to back them up with evidence. I'm not talking about that 3-year-old demo with the tire and tree, either. They need to wow us like the Unreal Engine 4 Infiltrator demo wowed us. You either put up or shut up. I like how they use outdated game footage in their videos for comparison.
 
Just saying that they can support it is not good enough. They have to show it. The fact that they won't even have it in their 2 upcoming games is really disappointing. When you are making claims like they are, you need to back them up with evidence. I'm not talking about that 3-year-old demo with the tire and tree, either. They need to wow us like the Unreal Engine 4 Infiltrator demo wowed us. You either put up or shut up.

I can certainly understand that, but right now, I'm personally more interested in animated voxel models and the sheer amount of detail in playable form - that's the main thing the engine will live or die on, in my opinion. And Dell actually gave a rough release date for their gaming projects: May next year. Meaning updates are going to be more frequent, I imagine.

Also, thanks for the article - some very interesting information in there. I'm also intrigued by Dell's comments on licensing the engine; he seems worried that the cheap subscription model from Epic and Crytek is potentially an unsustainable race to the bottom.
 

westman

Member
Q: The 2011 demo looks awfully repetitive...

A: That's because it was cobbled together in a few weeks ahead of Gamescom. The repetitiveness is irrelevant anyway - it's still rendering a ton of detail in the actual scene. Oh, and the 'water' is actually the entire scene mirrored and recoloured, meaning that's double the potential geometry to render.

I know they said something like that, but I don't buy it. A highly repetitive scene with few unique objects means that caching things to RAM could work well in their demo, yet be totally impractical for a more varied scene, as more accesses would need to hit the slow disk rather than the fast RAM. Their demo sidesteps that issue, and I find it likely that there is a very real trade-off between detail and variety that would kill their whole "infinite detail" sales pitch if presented honestly.
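
Some toy numbers to make that trade-off concrete (entirely made up by me, nothing official):

Code:
# Entirely made-up numbers, just to illustrate the trade-off the poster
# is describing - none of these figures come from Euclideon.
unique_objects  = 12            # the island demo reuses a handful of assets
voxels_per_obj  = 50_000_000    # guess at one detailed scanned object
bytes_per_voxel = 4             # colour packed, position implicit in a tree

resident = unique_objects * voxels_per_obj * bytes_per_voxel
print(resident / 2**30)         # ~2.2 GiB: instancing keeps it all in RAM

varied = 2_000 * voxels_per_obj * bytes_per_voxel
print(varied / 2**40)           # ~0.36 TiB: a varied scene must hit disk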
 

isamu

OMFG HOLY MOTHER OF MARY IN HEAVEN I CANT BELIEVE IT WTF WHERE ARE MY SEDATIVES AAAAHHH
Regardless of the controversy over Dell's comments, this tech looks extremely impressive IMHO, and the thought of racing games looking this good has me salivating.
 