I am using EdgeDetection on my camera and recently converted my rendering system to use Graphics.DrawMeshInstanced; however, my meshes no longer receive the EdgeDetection image effect. Any hints as to why? Any ideas how I can make these two systems work together?
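For context, my draw path looks roughly like the sketch below (field names and instance counts are placeholders, not my actual project code); the material has "Enable GPU Instancing" ticked:

```csharp
using UnityEngine;

// Minimal repro of my setup: draw instanced meshes without GameObjects.
public class InstancedDrawer : MonoBehaviour
{
    public Mesh mesh;          // mesh to instance
    public Material material;  // must have "Enable GPU Instancing" enabled
    Matrix4x4[] matrices = new Matrix4x4[100];

    void Start()
    {
        // Lay the instances out in a 10x10 grid.
        for (int i = 0; i < matrices.Length; i++)
            matrices[i] = Matrix4x4.Translate(new Vector3(i % 10, 0f, i / 10));
    }

    void Update()
    {
        // Submit all instances each frame in a single call.
        Graphics.DrawMeshInstanced(mesh, 0, material, matrices, matrices.Length);
    }
}
```

The meshes render fine in the scene; it is only the EdgeDetection image effect that ignores them.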
EDIT: Just discovered that when I use the Deferred Rendering Path, the Edge Detection filter is applied. However, I am using an Orthographic Camera and thus cannot easily use the Deferred Rendering Path. I am now looking into using Deferred Rendering with the camera set to Perspective, but applying a custom orthographic projection matrix via script.
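The idea I was testing is roughly this (the size value is a placeholder for my actual view extents):

```csharp
using UnityEngine;

// Sketch: force an orthographic projection onto a camera that the editor
// still considers "perspective", so the Deferred path stays available.
public static class OrthoOverride
{
    public static void Apply(Camera cam)
    {
        float size = 5f;            // placeholder half-height of the view volume
        float aspect = cam.aspect;
        cam.projectionMatrix = Matrix4x4.Ortho(
            -size * aspect, size * aspect,   // left, right
            -size, size,                     // bottom, top
            cam.nearClipPlane, cam.farClipPlane);
    }
}
```

As EDIT 2 below notes, this turned out not to be viable in practice.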
EDIT 2: Using deferred rendering with an orthographic projection matrix applied via script (while the camera remains "Perspective" in the editor) is not a feasible solution.
Is there a reason that the Standard Assets Edge Detection does not work with Graphics.DrawMeshInstanced using Forward Rendering?
EDIT 3: The "Luminance Threshold" setting works, but looks much worse than Edge Detection using the Triangle Depth Normals mode.
EDIT 4: Determined that _CameraDepthNormalsTexture does not get written to when using DrawMeshInstanced in Forward Rendering mode, but _CameraDepthTexture does. I filed a bug report with a repro project.
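For anyone trying to reproduce this: I forced the camera to generate both textures and compared the passes in the Frame Debugger; the depth-normals pre-pass simply skips the instanced meshes. The one-liner below is a sketch of that setup:

```csharp
using UnityEngine;

// Sketch: request both depth textures so the forward pre-passes run and
// can be inspected in the Frame Debugger. Attach to the camera.
public class ForceDepthTextures : MonoBehaviour
{
    void OnEnable()
    {
        GetComponent<Camera>().depthTextureMode =
            DepthTextureMode.Depth | DepthTextureMode.DepthNormals;
    }
}
```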
EDIT 5: The Unity QA Team responded to my bug report and are going to work on fixing this!
EDIT 6: They told me to use the CommandBuffer.DrawMeshInstanced version instead. I haven't tested whether this works yet. Unity is likely not fixing this due to the impending release of ScriptableRenderLoops, where everything should be better and working!
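If you want to try the suggested route yourself, it would look something like the untested sketch below. The camera event is my guess; it may need adjusting for the depth-normals pre-pass to actually pick the meshes up:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Untested sketch of the CommandBuffer approach Unity QA suggested.
public class InstancedViaCommandBuffer : MonoBehaviour
{
    public Mesh mesh;
    public Material material;   // instancing-enabled material
    Matrix4x4[] matrices = new Matrix4x4[100];
    CommandBuffer cb;

    void OnEnable()
    {
        for (int i = 0; i < matrices.Length; i++)
            matrices[i] = Matrix4x4.Translate(new Vector3(i % 10, 0f, i / 10));

        cb = new CommandBuffer { name = "Instanced meshes" };
        // Record the instanced draw into the buffer (shader pass 0).
        cb.DrawMeshInstanced(mesh, 0, material, 0, matrices, matrices.Length);

        // Guess at an injection point before image effects run.
        GetComponent<Camera>().AddCommandBuffer(
            CameraEvent.BeforeImageEffects, cb);
    }

    void OnDisable()
    {
        if (cb != null)
            GetComponent<Camera>().RemoveCommandBuffer(
                CameraEvent.BeforeImageEffects, cb);
    }
}
```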