Demo
Interview at CES 2020
Does anyone have any more details on this technology? Does it deliver what it promises: realtime ray tracing on mobile hardware?
That would be pretty impressive if it’s running on phone hardware. The website seems to list demos running on PC or MS Surface hardware:
A bit more info at Tom’s Hardware: Adshir: Using Ray Tracing For Better AR Quality | Tom's Hardware
And it looks like the beta SDK uses Unity!
That looks a bit phony: it only does reflection, not diffuse, and there is no reflection of the reflected model in the mirror. There is interreflection between different parts of the body (hand to chest), but I don’t detect it consistently in the same places. That doesn’t really look useful imho, nor impressive.
It looks like it’s still doing some diffuse, they’ve just got the materials pegged to be super shiny to “show off”.
Nvidia’s RTX demos also don’t do reflections in reflections in any real game. Only the Atomic Heart demo seems to be doing that (and paying the cost). Battlefield, Control, Wolfenstein: Youngblood, Modern Warfare: they all either do no reflections or fall back to old-school cubemap reflections for reflections of reflections. It’s super expensive, and rarely noticeable in real-world cases.
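Just to illustrate what that fallback usually means in practice, here’s a toy C++ sketch (my own, not code from any of those games; `tracePrimary` and `sampleCubemap` are made-up hooks): the first bounce is traced for real, anything deeper samples a cubemap.

```cpp
struct Vec3 { float x, y, z; };
struct Hit  { bool valid; Vec3 point, normal; float reflectivity; };

// Assumed renderer hooks, stubbed so the sketch compiles.
Vec3 sampleCubemap(Vec3 /*dir*/) { return {0.5f, 0.6f, 0.8f}; }               // env fallback
Vec3 tracePrimary(Vec3 /*o*/, Vec3 /*d*/, Hit* h) { h->valid = false; return {0, 0, 0}; }

Vec3  operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  reflect(Vec3 d, Vec3 n) { return d + n * (-2.0f * dot(d, n)); }

Vec3 shade(Vec3 origin, Vec3 dir, int depth)
{
    Hit hit;
    Vec3 base = tracePrimary(origin, dir, &hit);
    if (!hit.valid || hit.reflectivity <= 0.0f)
        return base;

    Vec3 r = reflect(dir, hit.normal);
    // Depth 0: a real traced reflection. Any deeper and we fall back to the
    // cubemap, which is why reflections *inside* reflections look flat.
    Vec3 refl = (depth == 0) ? shade(hit.point, r, depth + 1)
                             : sampleCubemap(r);
    return base * (1.0f - hit.reflectivity) + refl * hit.reflectivity;
}
```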
The biggest thing that stuck out to me was that the reflections are all perfectly shiny. It’s easier to show off ray tracing with that, but I suspect it either can’t do rough reflections, or they’re too expensive.
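To show why rough is so much worse than shiny, here’s another toy sketch (mine, not theirs; `traceRay` and `jitter` are hypothetical): a mirror reflection is one ray per pixel, a rough one is N jittered rays per pixel, or one noisy ray plus a denoiser.

```cpp
#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Assumed renderer hook, stubbed so the sketch compiles.
Vec3 traceRay(Vec3 /*origin*/, Vec3 /*dir*/) { return {0, 0, 0}; }

float rnd() { return rand() / (float)RAND_MAX - 0.5f; }

// Hypothetical helper: perturb the mirror direction in a cone that widens
// with roughness. A production renderer would importance-sample a GGX lobe
// instead, but the cost story is the same.
Vec3 jitter(Vec3 d, float roughness)
{
    Vec3 j = d + Vec3{rnd(), rnd(), rnd()} * roughness;
    float len = std::sqrt(j.x * j.x + j.y * j.y + j.z * j.z);
    return j * (1.0f / len);
}

Vec3 reflection(Vec3 origin, Vec3 mirrorDir, float roughness, int samples)
{
    if (roughness == 0.0f)              // perfectly shiny: one ray per pixel
        return traceRay(origin, mirrorDir);

    Vec3 sum{0, 0, 0};                  // rough: N rays per pixel
    for (int i = 0; i < samples; ++i)
        sum = sum + traceRay(origin, jitter(mirrorDir, roughness));
    return sum * (1.0f / samples);
}
```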
https://www.youtube.com/watch?v=LXnM750-u1I
They also have this demo. Would this count as a rough reflection, or is it just noisy? (It might not be realtime.)
Yeah, that video doesn’t have much information. I checked out a couple of other videos / papers on it, and most of the examples are using perfect reflections. That dino example runs on an i7 and an 870M and takes 22~60 ms per frame, but it’s also only doing the most basic Lambert and Blinn-Phong shading and reflecting a single object. It’s impressive, but from my past experience with ray tracing, it’s the addition of features that slows things down. If you keep your feature set very constrained it’s a lot easier (aka faster).
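For a sense of how small that feature set is, the entire Lambert + Blinn-Phong model fits in a few lines (my sketch, not their code):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
Vec3  normalize(Vec3 v) { return v * (1.0f / std::sqrt(dot(v, v))); }

// n: surface normal, l: direction to the light, v: direction to the camera.
Vec3 shade(Vec3 n, Vec3 l, Vec3 v, Vec3 albedo, float shininess)
{
    float diff = std::max(dot(n, l), 0.0f);                       // Lambert
    Vec3  h    = normalize(l + v);                                // half vector
    float spec = std::pow(std::max(dot(n, h), 0.0f), shininess);  // Blinn-Phong
    return albedo * diff + Vec3{spec, spec, spec};
}
```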
There’s also a lot of talk about “proprietary dynamic data structures”, “getting rid of expensive traversal costs”, and “being a software solution”. Apart from the proprietary part, that mostly describes the BVH everyone else uses. The most interesting part is that skeletal meshes being “free” implies they’re doing something else, so that might really be the secret sauce … or it could just be bull. Hard to know.
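For reference, the “expensive traversal” everyone else pays is basically a loop like this over a bounding volume hierarchy (a generic sketch, not their code; the box and triangle tests are stubbed):

```cpp
#include <vector>

struct Ray { /* origin, direction, t range */ };

struct BVHNode {
    // An axis-aligned bounding box would live here too.
    int left = -1, right = -1;       // child indices; -1 marks a leaf
    int firstTri = 0, triCount = 0;  // triangle range, used by leaves
};

// Assumed intersection tests, stubbed so the sketch compiles.
bool hitAABB(const BVHNode&, const Ray&) { return true; }
bool hitTriangle(int /*tri*/, const Ray&, float* /*t*/) { return false; }

bool traverse(const std::vector<BVHNode>& nodes, const Ray& r, float* tHit)
{
    if (nodes.empty()) return false;
    bool hit = false;
    int  stack[64];                  // explicit stack instead of recursion
    int  top = 0;
    stack[top++] = 0;                // start at the root

    while (top > 0) {
        const BVHNode& n = nodes[stack[--top]];
        if (!hitAABB(n, r)) continue;      // the cost: many incoherent box tests
        if (n.left < 0) {                  // leaf: test its triangles
            for (int i = 0; i < n.triCount; ++i)
                hit |= hitTriangle(n.firstTri + i, r, tHit);
        } else {
            stack[top++] = n.left;
            stack[top++] = n.right;
        }
    }
    return hit;
}
```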
It’s hard to tell exactly what they’re doing for things like soft shadows and rough reflections, too. The soft shadows in particular look like standard PCF shadows and show similar artifacts. Whenever they show rough reflections, they’re always using a fixed roughness and no Fresnel.
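For comparison, this is what standard PCF looks like (a generic sketch, not their code); the giveaway artifact is a penumbra whose width comes from the filter kernel, not from the distance to the occluder.

```cpp
// Assumed shadow-map depth fetch, stubbed so the sketch compiles.
float shadowMapDepth(float /*u*/, float /*v*/) { return 1.0f; }

// Classic 3x3 percentage-closer filtering: average several shadow-map
// depth tests around the fragment to soften the shadow edge.
float pcfShadow(float u, float v, float fragDepth, float texel)
{
    float lit = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
            lit += (fragDepth <= shadowMapDepth(u + dx * texel,
                                                v + dy * texel)) ? 1.0f : 0.0f;
    return lit / 9.0f;   // 0 = fully shadowed, 1 = fully lit
}
```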
Basically, lots of “look how cool this is” tech demos that might be way more limited than it first appears. Or it could just be bad art.
I did find an explanatory description of their technology right in the patent paper, which introduces a substitute for acceleration structures called, em… I forget what it’s called, but they did find something interesting, of course. Hope it gets released soon and proves useful.
You could have linked their patent paper lol
I have been thinking about this. Given a low-density surface area (like a mobile game), we could do a convex decomposition of the space, which means we would only need to track complex rays on the “boundary” surfaces that bridge two convex spaces, assuming there is a way to quickly discriminate “opaque” boundaries given a ray direction (i.e. fast angular discrimination). It would effectively make rays “local”: we quickly check the surfaces that belong to our local convex volume and pay a bit of cost only when crossing outside.
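Something like this is what I have in mind, where a ray only ever tests its current cell’s surface list and hops through a portal when it exits (pure speculation on my part, every name hypothetical):

```cpp
#include <vector>

struct Ray { /* origin, direction */ };

struct Portal { int neighborCell; /* plus the boundary polygon */ };

struct Cell {
    std::vector<int>    surfaces;  // surfaces fully inside this convex cell
    std::vector<Portal> portals;   // boundaries bridging to adjacent cells
};

// Assumed hooks, stubbed so the sketch compiles: a surface intersection test,
// and a test for which portal (if any) the ray exits through.
bool hitSurface(int /*surf*/, const Ray&, float* /*t*/) { return false; }
int  exitPortal(const Cell&, const Ray&) { return -1; }

bool traceLocal(const std::vector<Cell>& cells, const Ray& r, int cell)
{
    while (cell >= 0) {
        float t;
        for (int s : cells[cell].surfaces)   // rays stay "local": only the
            if (hitSurface(s, r, &t))        // current convex cell's surfaces
                return true;                 // are ever tested
        int p = exitPortal(cells[cell], r);  // pay a small cost only when the
        cell = (p >= 0)                      // ray crosses a boundary
             ? cells[cell].portals[p].neighborCell
             : -1;                           // ray left the scene
    }
    return false;
}
```

The open question is the one above: how cheaply you can classify a boundary as opaque or passable for a given ray direction.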
Or, more simply, it’s a voxel/box hash that just checks the list of surfaces inside the current voxel/box.
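i.e. a spatial hash keyed on the ray’s current voxel, something like this (again just a sketch with hypothetical names):

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };

// Hash integer voxel coordinates into a single key (classic spatial hashing).
uint64_t voxelKey(int x, int y, int z)
{
    return (uint64_t(uint32_t(x)) * 73856093u) ^
           (uint64_t(uint32_t(y)) * 19349663u) ^
           (uint64_t(uint32_t(z)) * 83492791u);
}

// Assumed intersection test, stubbed so the sketch compiles.
bool hitSurface(int /*surf*/, Vec3 /*o*/, Vec3 /*d*/, float* /*t*/) { return false; }

using VoxelMap = std::unordered_map<uint64_t, std::vector<int>>;  // surfaces per voxel

bool traceGrid(const VoxelMap& grid, Vec3 o, Vec3 d, float voxelSize, int maxSteps)
{
    // Crude walk: advance one voxel-length per step along a unit-length ray.
    // A real implementation would use a 3D DDA so each voxel along the ray
    // is visited exactly once.
    Vec3 p = o;
    for (int i = 0; i < maxSteps; ++i) {
        auto it = grid.find(voxelKey(int(std::floor(p.x / voxelSize)),
                                     int(std::floor(p.y / voxelSize)),
                                     int(std::floor(p.z / voxelSize))));
        if (it != grid.end()) {
            float t;
            for (int s : it->second)         // only this voxel's surface list
                if (hitSurface(s, o, d, &t))
                    return true;
        }
        p = {p.x + d.x * voxelSize,
             p.y + d.y * voxelSize,
             p.z + d.z * voxelSize};
    }
    return false;
}
```

The trade-off is the usual one for uniform grids: great when surface density is low and evenly spread (like a mobile game), bad when everything clumps into a few voxels.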