We use 4x MSAA and the LWRP pipeline.
I was thinking we could read all four MSAA samples to effectively increase the depth resolution and improve the quality of our gradient calculations and edge detection.
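Roughly what I have in mind, as an untested HLSL sketch. Reading the depth buffer as a `Texture2DMS` and the binding name `_DepthMS` are assumptions on my part; LWRP doesn't expose the MSAA depth attachment like this out of the box:

```hlsl
// Untested sketch: read the raw depth of all four sub-samples and use their
// spread as a sub-pixel edge signal. Texture2DMS access to the depth buffer
// and the binding name _DepthMS are assumptions, not something LWRP provides.
Texture2DMS<float, 4> _DepthMS;

float DepthEdge(int2 pixelCoord)
{
    float minD = 1.0;
    float maxD = 0.0;
    [unroll]
    for (int i = 0; i < 4; i++)
    {
        float d = _DepthMS.Load(pixelCoord, i); // raw device depth at sub-sample i
        minD = min(minD, d);
        maxD = max(maxD, d);
    }
    // A large spread across one pixel's sub-samples implies a depth edge
    // crossing that pixel, at sub-pixel precision.
    return maxD - minD;
}
```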
Has anyone tried this?
Does anyone know if the MSAA sample patterns are generally consistent across mobile devices?
4x MSAA is generally going to be a 4 rooks pattern.
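For reference, the standard 4x pattern, as Direct3D 11 documents it (and as Vulkan standardizes it), looks like this:

```hlsl
// D3D11 reference 4x pattern, drawn on a 4x4 sub-grid
// (each row and each column holds exactly one sample, hence "rooks"):
//
//   . 0 . .
//   . . . 1
//   2 . . .
//   . . 3 .
//
// The same positions as offsets from the pixel center, in 1/16ths of a pixel:
static const float2 kMsaa4xOffsets[4] =
{
    float2(-2.0, -6.0) / 16.0,  // sample 0
    float2( 6.0, -2.0) / 16.0,  // sample 1
    float2(-6.0,  2.0) / 16.0,  // sample 2
    float2( 2.0,  6.0) / 16.0   // sample 3
};
```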
The problem is there’s no guarantee what orientation it’ll be in. The pattern above is drawn with the orientation of Direct3D 11’s reference pattern, but even within Microsoft’s own documentation it appears both that way and mirrored. OpenGL has no such recommendation, though I would expect most implementations to follow it. Note, though, that this is a Direct3D render texture orientation, so the same pattern in OpenGL might be vertically flipped, with sample 0 on the bottom (since OpenGL’s UV origin is at the bottom).
Here is some info:
https://github.com/gpuweb/gpuweb/issues/108
Depth is fully supersampled, so it seems like a waste to lose any of that.
The default handling in LWRP is to take the MAX of all samples, discarding a lot of potentially useful data.
That matters especially for effects like edge detection, where subpixel accuracy helps a lot.
I don’t believe that’s true. If you’re talking about the depth texture used for post processing, that’s generated by rendering the view to a non-MSAA target before the main scene rendering occurs, so more accurately you’re getting the depth at the center of each pixel rather than the max depth of the sub-sample locations. The depth texture also gets used for the main directional light’s cascaded shadows, which are likewise rendered before the main scene, so that when the scene does render it only needs to sample a screen-space shadow texture rather than the cascaded shadow maps.
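To be concrete, a post effect only ever sees that single per-pixel value. A minimal sketch, using macro names from the SRP shader library (they may differ between LWRP versions):

```hlsl
// Minimal sketch: what a post effect actually gets is one (non-MSAA) depth
// value per pixel, taken at the pixel center. TEXTURE2D/SAMPLER/
// SAMPLE_TEXTURE2D are SRP shader library macros; names may vary by version.
TEXTURE2D(_CameraDepthTexture);
SAMPLER(sampler_CameraDepthTexture);

float SampleCenterDepth(float2 uv)
{
    // No sub-sample information survives into this texture.
    return SAMPLE_TEXTURE2D(_CameraDepthTexture, sampler_CameraDepthTexture, uv).r;
}
```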
I am sorry, I should clarify: I modified LWRP to reuse the rendered depth rather than adding an extra pass, since a depth pre-pass is not ideal on tiled platforms such as mobile, where the hardware’s hidden surface removal (HSR) makes it unnecessary.
We trigger the CopyDepth pass once the opaques are done rendering, and that pass applies the Max function mentioned above.
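Conceptually, the resolve it does is something like this (a simplified sketch, not the shipped shader; the attachment name here is illustrative):

```hlsl
// Simplified sketch of a max resolve over a 4x MSAA depth attachment.
Texture2DMS<float, 4> _CameraDepthAttachmentMS; // illustrative binding name

float Frag(float4 positionCS : SV_Position) : SV_Depth
{
    int2 coord = int2(positionCS.xy);
    float d = 0.0;
    [unroll]
    for (int i = 0; i < 4; i++)
    {
        // Keep the extreme of the four sub-samples (whether max means nearest
        // or farthest depends on whether the platform uses reversed-Z).
        d = max(d, _CameraDepthAttachmentMS.Load(coord, i));
    }
    return d;
}
```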
Coming back to this: it’d certainly be possible to use the full set of samples, though as mentioned above there’s no guarantee the pattern will be the same on all platforms, especially on mobile. As an aside, using the max depth from the samples will also have an impact on the calculated normals, and potentially already introduces some odd biases in the normals compared to the old method.
However, you might try implementing something like this to validate the current pattern on device, and potentially even set the proper offsets.
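For example, a rough version of that idea (an untested sketch: render this into a 4x1 non-MSAA target, read it back on the CPU, and compare the decoded offsets against the expected rooks pattern; `GetSamplePosition` needs driver support and may be unavailable on older mobile GL, where a coverage-based test, rendering thin triangles and checking which samples they hit, would be the fallback; `_AnyMsaa4xTexture` is any bound 4x MSAA texture and the name is made up):

```hlsl
// Untested validation sketch: report where the driver says the samples of a
// bound 4x MSAA texture actually live, one pixel per sample index.
Texture2DMS<float4, 4> _AnyMsaa4xTexture; // hypothetical binding

float4 Frag(float4 positionCS : SV_Position) : SV_Target
{
    int sampleIndex = (int)positionCS.x & 3;                          // 4x1 target: one pixel per sample
    float2 offset = _AnyMsaa4xTexture.GetSamplePosition(sampleIndex); // offset from pixel center, in pixels
    return float4(offset + 0.5, 0.0, 1.0);                            // encode [-0.5, 0.5) into [0, 1] for readback
}
```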
Thanks for the link!
Yes, using max depth seems like a bad idea.