Hello
I’m working on implementing some raymarching in a project. For now, I’m rendering my raymarching image into a Render Texture which I blend afterwards with my main camera. I’d like this texture to be antialiased. I don’t know much about AA principles, but the standard AA options (in the graphics settings or as post processing) don’t seem to work, which makes sense to me since the Render Texture is treated like an image without any geometry information.
I also tried to implement AA in my raymarching algorithm by running several raymarch loops per pixel on edges, but it’s too heavy (and we are in VR…).
Is there a way to extract some depth or normal information from my shader (which I have from the raymarching anyway) and use it to compute some relatively cheap AA?
I have also seen that render textures mention AA and depth settings, but I don’t really get their purpose here.
Thanks in advance
So, I found a way to write to several render textures from one shader computation, using Graphics.SetRenderTarget() with a RenderBuffer array.
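In case it helps someone later, the shader side of writing to multiple targets looks roughly like the sketch below. This is only a simplified sketch: raymarch(), the v2f interpolants and the exact packing of the second target are placeholders, not my literal code; the two outputs are assumed to be bound from C# with Graphics.SetRenderTarget(colorBuffers, depthBuffer).

```hlsl
// Simplified sketch of a fragment shader writing to two render targets at once.

// Assumed interpolants coming from the existing vertex shader.
struct v2f
{
    float4 pos       : SV_POSITION;
    float3 rayOrigin : TEXCOORD0;
    float3 rayDir    : TEXCOORD1;
};

// Stand-in for the existing raymarch loop (returns color, surface normal, hit distance).
float4 raymarch (float3 ro, float3 rd, out float3 normal, out float dist)
{
    // ... sphere-tracing loop goes here ...
    normal = float3(0, 1, 0);
    dist   = 0;
    return 0;
}

struct FragOutput
{
    float4 color       : SV_Target0; // raymarched color -> first render texture
    float4 normalDepth : SV_Target1; // packed normal (xyz) + distance (w) -> second render texture
};

FragOutput frag (v2f i)
{
    float3 n;
    float  d;
    float4 col = raymarch(i.rayOrigin, normalize(i.rayDir), n, d);

    FragOutput o;
    o.color       = col;
    o.normalDepth = float4(n * 0.5 + 0.5, d); // remap normal to 0..1 for storage
    return o;
}
```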
So now the question would just be: does anyone have an idea for a cheap AA algorithm based on normal and/or depth? Or would FXAA always be cheaper?
Thank you!
FXAA is always cheaper, but also not great for VR purposes, as it only handles spatial aliasing and not temporal aliasing, the latter of which is far more apparent in VR than on a normal display.
Below is an example of how FXAA fails on moving edges.
https://twitter.com/i/status/842887900177997824
What might be better is to render at a lower resolution, but with in-shader AA (doing multiple rays per pixel), and then use a depth-aware technique to composite back into the scene (a rough sketch of the multi-ray idea follows after the links below).
https://assetstore.unity.com/packages/tools/particles-effects/off-screen-particles-46208
https://github.com/slipster216/OffScreenParticleRendering
Or the technique described on page 61 of:
http://advances.realtimerendering.com/destiny/i3d_2015/I3D_Tatarchuk_keynote_2015_for_web.pdf
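To make the multi-ray idea above concrete, a minimal sketch of in-shader supersampling could look like the following. Everything here is an assumption about the shader’s structure: _PixelSize, rayForPixel() and raymarch() stand in for whatever the project already has.

```hlsl
// Minimal sketch: jitter the ray inside the pixel footprint and average the results.

#define AA_SAMPLES 4

// 2x2 rotated-grid sub-pixel offsets, in units of one pixel (-0.5..0.5).
static const float2 kOffsets[AA_SAMPLES] =
{
    float2(-0.375, -0.125), float2( 0.125, -0.375),
    float2( 0.375,  0.125), float2(-0.125,  0.375)
};

float2 _PixelSize; // 1 / render target resolution, set from script

struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

// Stand-ins for the project's existing helpers.
void   rayForPixel (float2 uv, out float3 ro, out float3 rd) { ro = 0; rd = float3(0, 0, 1); /* ... */ }
float4 raymarch (float3 ro, float3 rd) { /* ... existing loop ... */ return 0; }

float4 frag (v2f i) : SV_Target
{
    float4 acc = 0;
    for (int s = 0; s < AA_SAMPLES; s++)
    {
        // Offset the UV by a sub-pixel amount and rebuild the camera ray for it.
        float2 uv = i.uv + kOffsets[s] * _PixelSize;
        float3 ro, rd;
        rayForPixel(uv, ro, rd);
        acc += raymarch(ro, rd);
    }
    return acc / AA_SAMPLES;
}
```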
A render texture’s AA setting, as well as the AA enabled on the camera and in the graphics settings, is for MSAA. MSAA only applies to rasterized geometry edges and not interiors. For a raymarched setup this won’t be terribly useful since you’re presumably just rendering a quad. The depth settings also set up a depth buffer, which is useful for sorting opaque objects or rejecting pixels if they’re going to be rendered behind polygons which have already been rendered … which again, if you’re just rendering a quad, isn’t useful.
Thank you for your answer :). I also found another thread with a similar problem where you gave a similar and very informative answer.
Actually, I’m already rendering my raymarching at a lower resolution. I might retry multi-ray AA, but it was very demanding on my first try.
About the depth-aware technique to recomposite the image: from what I understand, it uses the difference between the full-res and a downsampled depth map. The thing is that computing a high-res depth for my raymarching implies running the core raymarch algorithm anyway. I tried to implement it with an upscaled depth and got very little result.
The Destiny tech paper is also very informative… I tried to set up the VDM but I’m not sure about the rest of the algorithm. Same here: from what I get, they blend low-res particles based on the high-res depth of the scene, which is different from my situation.
In any case, thank you very much, I’m starting to understand more clearly what solving my problem implies.
That Destiny paper goes into how they implemented VDM and how it didn’t work in real-world use cases, then describes a simpler solution that does work, which is what they shipped with.
That solution is to create two downsampled depth textures, one which has the max depth and the other the min depth. Then they store the color value for both depths and lerp between them based on the full-res scene depth when doing the composite. This works well for transparent mediums, like raymarched fog.
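A rough sketch of what that composite pass could look like is below. All of the texture and parameter names are assumptions (not the names from the paper), and it assumes the effect has been rendered twice at low resolution, once against the downsampled min depth and once against the max depth, with depths stored linearly.

```hlsl
// Rough sketch of the min/max depth composite (assumed names throughout).

#include "UnityCG.cginc"

sampler2D _SceneColor;          // full-res scene color
sampler2D _CameraDepthTexture;  // full-res scene depth
sampler2D _LowResColorNear;     // effect rendered against the downsampled MIN depth
sampler2D _LowResColorFar;      // effect rendered against the downsampled MAX depth
sampler2D _LowResDepthMin;      // downsampled min depth (linear)
sampler2D _LowResDepthMax;      // downsampled max depth (linear)

struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

float4 frag (v2f i) : SV_Target
{
    float sceneDepth = LinearEyeDepth(tex2D(_CameraDepthTexture, i.uv).r);
    float dMin = tex2D(_LowResDepthMin, i.uv).r;
    float dMax = tex2D(_LowResDepthMax, i.uv).r;

    // Where the full-res depth falls between the two downsampled depths decides
    // how much of each low-res color to take.
    float t = saturate((sceneDepth - dMin) / max(dMax - dMin, 1e-5));
    float4 effect = lerp(tex2D(_LowResColorNear, i.uv),
                         tex2D(_LowResColorFar,  i.uv), t);

    // Alpha-blend the effect over the full-res scene.
    float4 scene = tex2D(_SceneColor, i.uv);
    return float4(lerp(scene.rgb, effect.rgb, effect.a), scene.a);
}
```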
The off-screen particles asset uses a different approach for compositing, which is to create a single downsampled depth and render your effect against that low-res depth. The composite step takes the full-res scene depth, tests the 9 closest depths in the downsampled depth, then uses the color from the best-matching texel of the low-res render texture. This is a lot cheaper than the method Destiny used, but works best for fairly blurry effects without sharp details.
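In sketch form, that kind of nearest-depth upsample might look like the following (names are assumed, and the low-res depth is assumed to be stored as linear depth):

```hlsl
// Sketch of a nearest-depth style composite: compare the full-res depth against
// the 3x3 neighbourhood of downsampled depths and point-sample the low-res color
// at whichever texel matches best (assumed names throughout).

#include "UnityCG.cginc"

sampler2D _CameraDepthTexture;    // full-res scene depth
sampler2D _LowResDepth;           // downsampled depth (linear)
sampler2D _LowResColor;           // low-res effect color
float4    _LowResDepth_TexelSize; // xy = 1 / low-res resolution (Unity convention)

struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

float4 frag (v2f i) : SV_Target
{
    float fullDepth = LinearEyeDepth(tex2D(_CameraDepthTexture, i.uv).r);

    float  bestDiff = 1e10;
    float2 bestUV   = i.uv;

    // 3x3 neighbourhood in the low-res depth buffer.
    for (int y = -1; y <= 1; y++)
    {
        for (int x = -1; x <= 1; x++)
        {
            float2 uv   = i.uv + float2(x, y) * _LowResDepth_TexelSize.xy;
            float  diff = abs(tex2D(_LowResDepth, uv).r - fullDepth);
            if (diff < bestDiff)
            {
                bestDiff = diff;
                bestUV   = uv;
            }
        }
    }

    // Point-sample the low-res color at the best-matching texel.
    return tex2D(_LowResColor, bestUV);
}
```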
Doing a composite between a low resolution buffer with sharp details and the full resolution depth is going to be a difficult task no matter what you do. Either of the destiny style approaches may work better than the off screen particles approach in that case.
Another method could be to just use the interpolated low-res depth of the raymarched shape, unless there’s a discontinuity in the depth, in which case use a point-sampled depth, or maybe construct an edge from the 4 depth texels.
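As a rough sketch (names, the threshold, and the sampler setup are assumptions), the depth lookup for that could be something like:

```hlsl
// Sketch: trust the interpolated low-res depth in smooth regions, fall back to a
// point sample at the nearest texel centre across depth discontinuities.

sampler2D _LowResDepth;             // low-res raymarched depth (linear), bilinear sampler assumed
float4    _LowResDepth_TexelSize;   // xy = 1 / resolution, zw = resolution (Unity convention)
float     _DiscontinuityThreshold;  // tuned per scene

float SampleLowResDepth (float2 uv)
{
    // The four texels a bilinear fetch at uv would blend together.
    float2 base = uv - 0.5 * _LowResDepth_TexelSize.xy;
    float d00 = tex2D(_LowResDepth, base).r;
    float d10 = tex2D(_LowResDepth, base + float2(_LowResDepth_TexelSize.x, 0)).r;
    float d01 = tex2D(_LowResDepth, base + float2(0, _LowResDepth_TexelSize.y)).r;
    float d11 = tex2D(_LowResDepth, base + _LowResDepth_TexelSize.xy).r;

    float dMin = min(min(d00, d10), min(d01, d11));
    float dMax = max(max(d00, d10), max(d01, d11));

    // Candidate depths: the hardware-interpolated fetch and a point sample at
    // the nearest texel centre.
    float  bilinearDepth = tex2D(_LowResDepth, uv).r;
    float2 texelUV       = (floor(uv * _LowResDepth_TexelSize.zw) + 0.5) * _LowResDepth_TexelSize.xy;
    float  pointDepth    = tex2D(_LowResDepth, texelUV).r;

    // Smooth region: trust the interpolated depth; depth edge: don't blend across it.
    return (dMax - dMin < _DiscontinuityThreshold) ? bilinearDepth : pointDepth;
}
```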
Really though, these days the de facto method for handling anti-aliasing, or perhaps more accurately super sampling, for raymarching is to use temporal reprojection of some kind. The short version is that you reconstruct a full resolution buffer from multiple frames rather than anti-aliasing the low resolution buffer. This works great for objects that are opaque to semi-opaque and don’t animate quickly. Many cloud rendering solutions in recent games use this, but the idea is pretty old. There are some papers on using it for “interactive previews” of raytraced scenes, so you can move the camera after or mid-render in an offline tool without needing to restart rendering from scratch, and even some demos showing it working in real time on voxel octrees:
The demo was posted on that site as well; the original link doesn’t work anymore, but it’s still archived here:
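To give a feel for what that looks like in practice, here is a very rough sketch of a reprojection pass. Every name here is an assumption, and a real implementation also needs history rejection for disocclusions, neighbourhood clamping, and so on.

```hlsl
// Very rough sketch of temporal reprojection: find where this pixel's surface
// was in the previous frame and blend the new sample into the accumulated history.

#include "UnityCG.cginc"

sampler2D _CurrentRaymarch; // this frame's (jittered, possibly low-res) raymarch result
sampler2D _History;         // accumulated result from previous frames
sampler2D _RaymarchDepth;   // linear hit distance from the raymarcher
float4x4  _PrevViewProj;    // previous frame's view-projection matrix, set from script
float     _BlendFactor;     // e.g. 0.1 = 10% new sample, 90% history

// Assumed interpolants: uv plus the camera ray for this pixel from the vertex shader.
struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; float3 rayDir : TEXCOORD1; };

float4 frag (v2f i) : SV_Target
{
    float4 current = tex2D(_CurrentRaymarch, i.uv);

    // Reconstruct a world position from the raymarched hit distance.
    float  dist     = tex2D(_RaymarchDepth, i.uv).r;
    float3 worldPos = _WorldSpaceCameraPos + normalize(i.rayDir) * dist;

    // Project into the previous frame to find where this point was on screen
    // (ignoring platform-specific y flips for brevity).
    float4 prevClip = mul(_PrevViewProj, float4(worldPos, 1.0));
    float2 prevUV   = (prevClip.xy / prevClip.w) * 0.5 + 0.5;

    // Reject history that falls off screen or behind the camera
    // (e.g. after fast head movement in VR).
    if (prevClip.w <= 0.0 || any(prevUV < 0.0) || any(prevUV > 1.0))
        return current;

    float4 history = tex2D(_History, prevUV);

    // Exponential blend: most of the weight stays on the reprojected history,
    // so detail accumulates over several frames.
    return lerp(history, current, _BlendFactor);
}
```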