Hi
gotta question. So what is the difference between the OnPostRender and OnRenderImage method for cameras, except that OnRenderImage gets the source/destination render textures? Is there even another difference?
Thanks
Chicken
OnPostRender can be used to undo camera-specific things you set in OnPreRender (see the example on disabling fog per camera in the OnPostRender entry of the script reference). This prevents those settings from unintentionally applying to other cameras in the scene as well. These are normally logical operations.
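As a minimal sketch of that pattern (essentially what the script reference example does; the class name is mine):

```csharp
using UnityEngine;

// Attach to a specific camera: fog is switched off just for this
// camera's rendering and restored afterwards, so other cameras in
// the scene keep the global setting.
public class FoglessCamera : MonoBehaviour
{
    private bool revertFogState;

    void OnPreRender()
    {
        revertFogState = RenderSettings.fog;
        RenderSettings.fog = false;
    }

    void OnPostRender()
    {
        RenderSettings.fog = revertFogState;
    }
}
```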
OnRenderImage is used for image effects: it lets you modify the image the camera has just rendered (source), usually by rendering a full-screen quad with an image effect shader into the destination texture. These are graphical operations.
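In sketch form (the material and its image effect shader are assumptions, assigned in the Inspector):

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class SimpleImageEffect : MonoBehaviour
{
    // Assumed: a material whose shader implements the image effect.
    public Material effectMaterial;

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Graphics.Blit draws a full-screen quad with the material's
        // shader, reading from source and writing into destination.
        Graphics.Blit(source, destination, effectMaterial);
    }
}
```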
Okay, thanks. In addition to that, is it a better idea to render a second camera using replacement shaders in the OnPostRender method? Because right now I'm rendering a second camera in the OnRenderImage method and it's giving me some really bad problems. For example, it is for some reason changing the first camera's rendered image (and I seriously don't know why), and it destroys the DepthNormals texture if both cameras use the same rendering path. And those are bad bugs, as I really like the effect I'm getting…
You can't trigger a camera to render manually. It all depends on the depth order (set in the Camera component); lower depths render first.
I can't speak much for image effects, but you can't do any rendering in OnPostRender.
Are you using multiple cameras for something like projectors, or like render passes?
Yes you can.
http://unity3d.com/support/documentation/ScriptReference/Camera.Render.html
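For example (a sketch; the second camera is assumed to be disabled in the Inspector so it only renders when asked):

```csharp
using UnityEngine;

public class ManualRenderExample : MonoBehaviour
{
    public Camera secondCamera;   // assumed: disabled, so it never renders on its own
    public RenderTexture target;  // assumed: created or assigned elsewhere

    void LateUpdate()
    {
        // Render the second camera on demand into a render texture.
        secondCamera.targetTexture = target;
        secondCamera.Render();
        secondCamera.targetTexture = null;
    }
}
```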
As for the OP:
For the effect you're trying to achieve, would it be possible to render both cameras separately, store their render textures, and then combine the textures when both are done?
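A rough sketch of that idea, attached to the main camera (the combine material and its _SecondTex property are assumptions, not an actual API):

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class CombineCameraImages : MonoBehaviour
{
    public Camera secondCamera;       // assumed: disabled, rendered only on demand
    public Material combineMaterial;  // assumed: shader blending _MainTex with _SecondTex

    private RenderTexture secondResult;

    void OnPreRender()
    {
        // Render the second camera to its own texture before the
        // main camera's image effect runs.
        if (secondResult == null)
            secondResult = new RenderTexture(Screen.width, Screen.height, 16);
        secondCamera.targetTexture = secondResult;
        secondCamera.Render();
        secondCamera.targetTexture = null;
    }

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Both images are finished here, so combine them in one blit.
        combineMaterial.SetTexture("_SecondTex", secondResult);
        Graphics.Blit(source, destination, combineMaterial);
    }
}
```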
Wow, I did not know that… Thanks, tom
Hi, thanks for the answers so far. I do indeed render the second camera manually in the OnRenderImage call. Well, I guess I have to look for the issue then; maybe I did something wrong and haven't found it yet… It's for a screen-space subsurface scattering effect, btw, and I've got most of it working already, but not everything.
Well, at least BDev learned something today, so this thread wasn't completely useless.
I have the same requirement, and I tested this with a replacement shader, but it had no effect. Unity 2018.4.21.