So I have a particle system that I would like to render in a special way to distort the screen slightly, and I'd like to do this in screen space. I originally set my custom shader's render queue to "Overlay" (4000), but that shader on the particle system's material doesn't "know" about the pixels on the screen. Is there a way to "re-route" the rendering of that particle system to a temporary render texture, where I can accumulate all of the data?
Because from there, I could use a CommandBuffer/ImageEffect with Graphics.Blit and another shader that properly distorts the rendered screen based on that temporary render texture. But I don't know how to tell the shader to render to a render texture instead.
Any help would be greatly appreciated
Yes! Thank you, I'll definitely take a look at that. I'm glad it's free, too.
EDIT: Okay so this is basically what I found out. It’s kinda hard to explain so I hope my explanation makes sense.
You can make a new layer in Unity. In my case I called the layer "Screen Distortion", but it can be called anything you'd like. Then I made my main camera render everything except that new layer. I also create a second camera (through C# code, in Awake) that's positioned exactly where my primary camera is.
This secondary camera renders nothing except the newly created layer, and its target RenderTexture is set (through code as well) to a temporary RenderTexture during OnRenderImage. But since I want it to render at a very specific point in the render process, I disable the secondary camera's Camera component (so it stops rendering automatically) and call its Render() method exactly when I need it to render into the texture during OnRenderImage.
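Roughly, the Awake setup looks something like this (simplified sketch, not my exact code – the class name, field names, and the "Screen Distortion" layer name are just examples):

```csharp
using UnityEngine;

// Sketch: sits on the main camera; assumes a layer named "Screen Distortion" exists.
[RequireComponent(typeof(Camera))]
public class ScreenDistortionEffect : MonoBehaviour
{
    Camera mainCamera;
    Camera distortionCamera;   // the secondary camera, created below

    void Awake()
    {
        mainCamera = GetComponent<Camera>();
        int distortionLayer = LayerMask.NameToLayer("Screen Distortion");

        // Main camera renders everything except the distortion layer.
        mainCamera.cullingMask &= ~(1 << distortionLayer);

        // Secondary camera parented to the main camera so it sits exactly where it is.
        var go = new GameObject("Distortion Camera");
        go.transform.SetParent(transform, false);
        distortionCamera = go.AddComponent<Camera>();
        distortionCamera.CopyFrom(mainCamera);
        distortionCamera.cullingMask = 1 << distortionLayer;        // render only the new layer
        distortionCamera.clearFlags = CameraClearFlags.SolidColor;  // assumption: clear to zero
        distortionCamera.backgroundColor = Color.clear;             // so empty pixels don't distort
        distortionCamera.enabled = false;  // don't render automatically; Render() is called manually
    }
}
```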
The secondary camera renders whatever it sees on my "Screen Distortion" layer – any kind of mesh, including particles – into my temporary RenderTexture. I can control how the particles are rendered through the custom shaders on their particle systems' materials. In my case, I output their normals in the camera's view space (and I made another variation that can use normal maps as well, which works better for particles since they're usually flat planes). I use the camera's view space because the normals' directions are then relative to the camera, and when I apply the distortion later down the line, I use the normals' xy components to distort the screen along that direction.
After the secondary camera renders into the RenderTexture, I have a shader that applies the distortion based on that texture of accumulated normals (I don't really need the normals' z direction, since the screen is only 2D). I use Graphics.Blit() with that shader to apply the distortion to the screen, and all of this happens during the OnRenderImage method.
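And the OnRenderImage part, continuing the same script as the sketch above – again simplified, and assuming the distortion shader reads the normals from a texture property (I'm calling it _NormalsTex here, your property name may differ):

```csharp
    // Continuing the script above.
    public Material distortionMaterial;   // uses the distortion shader, assigned in the inspector

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Temporary texture that will hold the accumulated view-space normals.
        RenderTexture normalsRT = RenderTexture.GetTemporary(
            source.width, source.height, 16, RenderTextureFormat.ARGBHalf);

        // Render only the "Screen Distortion" layer into it, right at this point in the frame.
        distortionCamera.targetTexture = normalsRT;
        distortionCamera.Render();
        distortionCamera.targetTexture = null;

        // The distortion shader samples the normals' xy components and
        // offsets the screen UVs along that direction.
        distortionMaterial.SetTexture("_NormalsTex", normalsRT);
        Graphics.Blit(source, destination, distortionMaterial);

        RenderTexture.ReleaseTemporary(normalsRT);
    }
```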