As I’m stuck with Unity Free for the time being, I’m looking into doing some cool shader effects that would usually be done as post-processing FX right in my mesh shaders.
For example, I have a rudimentary tone-mapper, a color LUT and custom fog running in the fragment part of all my game objects’ shaders. Individually none of the effects are terribly complex, but obviously the total instruction count increases quite a bit.
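Roughly speaking, the idea looks something like the sketch below. This is a minimal example rather than my actual shader: the Reinhard-style curve, the fog formula and all property names are just placeholders, and the LUT step would be one more texture lookup in the same finalcolor function.

```
Shader "Custom/LitWithGrading" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _Exposure ("Exposure", Float) = 1.0
        _FogColor ("Fog Color", Color) = (0.5, 0.6, 0.7, 1)
        _FogDensity ("Fog Density", Float) = 0.02
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        CGPROGRAM
        // finalcolor lets us modify the shaded result after lighting,
        // which is where the "fake post-processing" happens.
        #pragma surface surf Lambert finalcolor:grade

        sampler2D _MainTex;
        half _Exposure;
        fixed4 _FogColor;
        half _FogDensity;

        struct Input {
            float2 uv_MainTex;
            float3 worldPos;
        };

        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }

        // Runs per fragment after lighting. Note: with multiple lights this
        // also runs in the ForwardAdd pass, so it gets applied more than once.
        void grade (Input IN, SurfaceOutput o, inout fixed4 color) {
            // Simple Reinhard-style tonemap with an exposure control.
            color.rgb = (color.rgb * _Exposure) / (1.0 + color.rgb * _Exposure);
            // A 2D strip LUT would be one more tex2D lookup here.
            // Exponential distance fog towards a fixed fog color.
            float dist = distance(_WorldSpaceCameraPos, IN.worldPos);
            float fog = saturate(exp(-_FogDensity * dist));
            color.rgb = lerp(_FogColor.rgb, color.rgb, fog);
        }
        ENDCG
    }
    FallBack "Diffuse"
}
```

The finalcolor modifier is what makes this workable per object, since it runs after Unity’s lighting has been applied.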
So far it appears to be working well and doesn’t seem to affect my frame rate much at all. I should probably run a stress-test to see how it copes with a higher number of objects than I’m currently testing with, though.
For what it’s worth, I’m not particularly interested in mobile development so I’m focusing on reasonably capable machines only.
Anyway, as this method does strike me as rather unorthodox, I thought I’d check here whether anyone else has gone down this route and has some experiences to share?
I experimented with this a while back and made some posts about it in the Art WIP thread. It is possible to get it to work on mobile devices if you are careful about how many objects use this type of shader. Desktop is where this technique really shone, though, and I have plans to use it in some upcoming projects. Here is an image of some of the results I was able to obtain.
Thanks, that looks very cool and shows that my idea is indeed viable.
Are you applying the color correction in both the “Forward Base” and “Forward Add” passes? I was thinking I could save some processing time by only applying it to the base pass, but haven’t done any conclusive testing so far.
The effect was achieved using a single pass with all lighting disabled and only the baked lightmap for lighting information. The post-processing is applied via the same shader with different settings for different objects depending on their location in the scene and visual importance (i.e. foreground, midground, and background). It was a bit tricky cramming it all in and getting it under the instruction limit, but it is possible. My original intention was to make this for mobile use, but it was a bit too much for my iPhone 4. When I start work on my desktop game projects, I’m planning on reworking it to include several different lighting models and a few more effects.
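To give a rough idea of the structure, it boiled down to something like the sketch below. This is simplified and not the actual shader: the saturation control stands in for the full set of grading settings, and the manual lightmap declarations are the Unity 4-era way of doing it.

```
Shader "Custom/LightmapOnlyGraded" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _Saturation ("Saturation", Range(0, 2)) = 1.0
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float4 _MainTex_ST;
            half _Saturation;

            // Unity 4-era: the baked lightmap built-ins are declared by hand.
            sampler2D unity_Lightmap;
            float4 unity_LightmapST;

            struct v2f {
                float4 pos  : SV_POSITION;
                float2 uv   : TEXCOORD0;
                float2 uvLM : TEXCOORD1;
            };

            v2f vert (appdata_full v) {
                v2f o;
                o.pos  = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv   = TRANSFORM_TEX(v.texcoord, _MainTex);
                o.uvLM = v.texcoord1.xy * unity_LightmapST.xy + unity_LightmapST.zw;
                return o;
            }

            fixed4 frag (v2f i) : COLOR {
                // Albedo * baked lightmap is the only "lighting" in this single pass.
                fixed3 col = tex2D(_MainTex, i.uv).rgb;
                col *= DecodeLightmap(tex2D(unity_Lightmap, i.uvLM));

                // Grading controlled per material: foreground, midground and
                // background objects simply get different property values.
                half luma = dot(col, fixed3(0.299, 0.587, 0.114));
                col = lerp(fixed3(luma, luma, luma), col, _Saturation);
                return fixed4(col, 1);
            }
            ENDCG
        }
    }
}
```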
I’m curious why you two are doing post-processing in your regular object shaders. Are you using information there that isn’t available to an actual image effect?
Believe me, if I had access to image effects I’d be all over them. Alas, they’re not supported in Unity Free, which is what I’m stuck with until I can secure funding for my project.
Should be relatively easy to move my “fake” post-processing functions into actual image effects when the time comes.
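For reference, the image-effect version would essentially be a full-screen shader like the sketch below, driven from an OnRenderImage callback with Graphics.Blit on the C# side (Pro-only at the time). The shader name and the Reinhard-style curve are just placeholders for whatever grading functions end up being moved over.

```
Shader "Hidden/ToneMapImageEffect" {
    Properties {
        _MainTex ("Rendered Frame", 2D) = "white" {}
        _Exposure ("Exposure", Float) = 1.0
    }
    SubShader {
        Pass {
            ZTest Always Cull Off ZWrite Off
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            half _Exposure;

            fixed4 frag (v2f_img i) : COLOR {
                fixed4 col = tex2D(_MainTex, i.uv);
                // Same Reinhard-style curve as the per-object version,
                // now applied once to the whole rendered frame.
                col.rgb = (col.rgb * _Exposure) / (1.0 + col.rgb * _Exposure);
                return col;
            }
            ENDCG
        }
    }
    Fallback Off
}
```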
Thanks, looking into replacement shaders now. Your own replacement shaders pack looks very interesting.
Would you mind sharing how you get the results of previously rendered passes in the replacement shader? At least that’s what I assume must be happening to achieve a decent blur or edge detection in your shaders… I thought this “grabpass”-like functionality wasn’t available in Unity Free, but apparently a replacement shader is a way around this limitation?
Or are you just applying the blur over the _MainTex and not the actual rendered result?
Also, does using replacement shaders increase draw calls for each object, or will dynamic batching still kick in?
There is no such thing as a “previous pass” with replacement shaders; as the name implies, they replace the material’s original shader depending on its RenderType tag. Their original purpose is to bulk-render simple things such as depth maps, world positions, normals, etc. into a render texture (Unity’s depth image is done like this).
If the Unity people added functionality to at least let us find out what the original shader is, that would be great, but it’s not there at the moment.
So you only have access to the original shader’s RenderType and the original material’s properties, such as _Color (if any), _MainTex (if any), the bump texture (if any), speculars (if any), etc.
That said, this shouldn’t stop you from writing your own custom shader framework to overcome such issues instead of waiting for the Unity people.
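To illustrate the basics, a minimal replacement shader looks something like this (just a sketch; the desaturated output is arbitrary, and you would add further SubShaders for other RenderType tags such as Transparent):

```
Shader "Hidden/ReplacementExample" {
    SubShader {
        // Used for objects whose original shader is tagged "RenderType"="Opaque".
        Tags { "RenderType"="Opaque" }
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            // Only the original material's properties are available here,
            // e.g. its _MainTex if it has one; never a previously rendered pass.
            sampler2D _MainTex;
            float4 _MainTex_ST;

            struct v2f {
                float4 pos : SV_POSITION;
                float2 uv  : TEXCOORD0;
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv  = TRANSFORM_TEX(v.texcoord, _MainTex);
                return o;
            }

            fixed4 frag (v2f i) : COLOR {
                // Example output: a desaturated version of the original albedo.
                fixed3 col = tex2D(_MainTex, i.uv).rgb;
                half luma = dot(col, fixed3(0.299, 0.587, 0.114));
                return fixed4(luma, luma, luma, 1);
            }
            ENDCG
        }
    }
}
```

You activate it from script with camera.SetReplacementShader(shader, "RenderType"), and every object whose original shader carries a matching RenderType tag is then drawn with the corresponding SubShader above instead of its own.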
In that case replacement shaders probably won’t do much for me, as my tone-mapping and LUT functions rely on processing the results of my custom surface shader after lighting.