Hi, sorry about the title. I’m creating this range effect for my tower and it’s important that the transparent effect covers all geo except the tower itself.
First, the range effect itself is a large quad with ZTest Always that spreads out over the land with the desired effect. Second, I’ve set up a special camera that lets me get a RenderTexture of just the tower, which I’d like to cut out of the range effect quad. The special camera always aligns with the main camera before rendering.
Third, I pass that RenderTexture to the effect shader that allows me to do a clip in the range effect where there are opaque pixels in the RenderTexture.
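A rough sketch of the setup described above, in case it helps pin down the order-of-operations issue. All names here (maskCamera, rangeMaterial, "_TowerMask") are illustrative, not from the actual project:

```csharp
using UnityEngine;

// Hypothetical sketch: a second camera renders only the tower into a
// RenderTexture, which the range-effect material samples for its clip.
public class TowerMaskSetup : MonoBehaviour
{
    public Camera mainCamera;
    public Camera maskCamera;      // renders only the tower's layer
    public Material rangeMaterial; // material on the range-effect quad
    RenderTexture maskRT;

    void Start()
    {
        maskRT = new RenderTexture(Screen.width, Screen.height, 24);
        maskCamera.targetTexture = maskRT;
        maskCamera.clearFlags = CameraClearFlags.SolidColor;
        maskCamera.backgroundColor = Color.clear;
        // The material keeps a reference to the texture object, so this
        // assignment only needs to happen once, not every frame.
        rangeMaterial.SetTexture("_TowerMask", maskRT);
    }

    void LateUpdate()
    {
        // Align the mask camera with the main camera before rendering.
        maskCamera.transform.SetPositionAndRotation(
            mainCamera.transform.position, mainCamera.transform.rotation);
        maskCamera.fieldOfView = mainCamera.fieldOfView;
    }
}
```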
This looks correct when the camera isn’t moving. But what I’m seeing is the RenderTexture seems to always be one frame behind. Here’s a picture of what happens when I’m panning the camera quite quickly to the right.
I suspect the cutout texture the effect samples is from the previous frame.
So, I’ve tried adjusting the camera render order so the camera rendering the tower goes first (I’ve set its depth to -2; the main camera is set to -1), and then in an OnPostRender callback passing the RenderTexture to the range effect shader immediately, hoping to set it in the range effect in time… but it seems to me that once the game is in the render phase, changes to shader params have no effect until the next loop.
So now, my question is: how is this really supposed to be done? It’s not clear to me how data is intended to be transferred between disconnected shaders or draw calls within one frame, or how masking like this is meant to be achieved.
We are using deferred rendering.
I’m looking forward to learning something. Thanks!
I don’t see anything particularly wrong with your setup that would cause the problem; I’m guessing there’s some order-of-operations issue in your code. However, there should be no need to pass the render texture to your shader every frame after rendering to it. As long as the render texture isn’t simultaneously assigned as the render target and being sampled by a shader rendering to that target, it’s fine to assign both immediately.
I’d probably create a temporary render texture and assign it to the camera’s render target and to the range material one line after the other. Or, more honestly, I’d probably use command buffers and skip using the second camera.
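A minimal sketch of the temporary-render-texture suggestion above (the names are hypothetical):

```csharp
using UnityEngine;

// Sketch of the suggestion: grab a temporary render texture and assign it
// to the mask camera and the range material one line after the other.
public class TowerMaskTempRT : MonoBehaviour
{
    public Camera maskCamera;
    public Material rangeMaterial;
    RenderTexture maskRT;

    void OnEnable()
    {
        maskRT = RenderTexture.GetTemporary(Screen.width, Screen.height, 24);
        maskCamera.targetTexture = maskRT;              // camera renders into it...
        rangeMaterial.SetTexture("_TowerMask", maskRT); // ...and the material samples it
    }

    void OnDisable()
    {
        maskCamera.targetTexture = null;
        rangeMaterial.SetTexture("_TowerMask", null);
        RenderTexture.ReleaseTemporary(maskRT);
    }
}
```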
Thanks for the response. It’s news to me that I don’t have to pass a render texture repeatedly, that’s pretty cool. I guess I’ll keep toying with it. Can you point me in the direction of a good command buffer tutorial? My knowledge of Unity shaders and shaders in general is highly localized.
Also, the GameObject generating the range effect is a quad that I allow to render with the main scene. It’s just sitting in the scene like a normal object with the special shader. I’m not sure how to use command buffers to render that one object specially.
I can’t say I know a “good” tutorial, no. Command buffers are just a tool that can do a lot of things, so any tutorial is only going to scratch the surface of what they can do and how they can be used. Unity’s own documentation links to some useful example projects. https://docs.unity3d.com/Manual/GraphicsCommandBuffers.html
A lot of really thorough tutorials on them are more focused on the new Scriptable Render Pipeline since a lot of the internals of those are based around heavy command buffer usage. But that’s less helpful when trying to do stuff for the built in renderer.
But honestly it might be easier to use DrawMesh(), or even just change the shader to be used with a Blit() instead, though it can be a bit easier to reconstruct the world space position from the depth buffer with the in-scene quad (either on a normal game object or via DrawMesh()); it’s not required.
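For what it’s worth, here’s roughly what the Blit() route could look like: the range material applied as a full-screen pass instead of an in-scene quad. This assumes the shader samples _MainTex and the camera depth texture; all names are illustrative:

```csharp
using UnityEngine;

// Sketch of the Blit() alternative: apply the range-effect material as a
// full-screen image effect pass on the camera.
[RequireComponent(typeof(Camera))]
public class RangeEffectBlit : MonoBehaviour
{
    public Material rangeMaterial;

    void OnEnable()
    {
        // The shader needs the depth texture to reconstruct world positions.
        GetComponent<Camera>().depthTextureMode |= DepthTextureMode.Depth;
    }

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        Graphics.Blit(src, dst, rangeMaterial);
    }
}
```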
Here’s the short version of how I’d approach this.
Have a script on a tower that has a list of the renderer components you want to be masked from the range.
Create a command buffer that does these steps:
You only have to create this once, unless you use DrawMesh(), since you’ll need to manually set up / assign the object-to-world transform matrix for it if it changes. After that it’s a matter of adding the buffer to the camera at the appropriate event (probably CameraEvent.BeforeForwardAlpha) when you want the effect to show, and removing it when you don’t. https://docs.unity3d.com/ScriptReference/Camera.AddCommandBuffer.html
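To make the outline above concrete, here’s a hedged sketch filling in plausible steps for the command buffer; it renders the listed tower renderers into a temporary texture, then draws the range quad with a material that clips against that texture. Everything here (rangeMaterial, "_TowerMask", etc.) is an assumption, not the definitive implementation:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class TowerRangeEffect : MonoBehaviour
{
    public Renderer[] maskedRenderers; // tower renderers to mask out of the range
    public Renderer rangeQuad;         // the range-effect quad (disable its normal
                                       // rendering so it isn't drawn twice)
    public Material rangeMaterial;
    public Camera targetCamera;

    CommandBuffer buffer;
    int maskID;

    void OnEnable()
    {
        maskID = Shader.PropertyToID("_TowerMask");
        buffer = new CommandBuffer { name = "Tower Range Mask" };

        // 1. Get a temporary camera-sized render texture and clear it.
        buffer.GetTemporaryRT(maskID, -1, -1, 24);
        buffer.SetRenderTarget(new RenderTargetIdentifier(maskID));
        buffer.ClearRenderTarget(true, true, Color.clear);

        // 2. Render just the tower into it.
        foreach (var r in maskedRenderers)
            buffer.DrawRenderer(r, r.sharedMaterial);

        // 3. Switch back to the camera target and draw the range quad,
        //    whose shader clips against _TowerMask.
        buffer.SetRenderTarget(BuiltinRenderTextureType.CameraTarget);
        buffer.DrawRenderer(rangeQuad, rangeMaterial);
        buffer.ReleaseTemporaryRT(maskID);

        // Created once; executed each frame just before transparents draw.
        targetCamera.AddCommandBuffer(CameraEvent.BeforeForwardAlpha, buffer);
    }

    void OnDisable()
    {
        targetCamera.RemoveCommandBuffer(CameraEvent.BeforeForwardAlpha, buffer);
        buffer.Release();
    }
}
```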