Ouff! I haven’t been active on either the forum or this project for a while.
Sorry for the lack of updates, guys.
Due to some requirements I couldn’t meet with the shader I made, I went with a different approach.
The reason why I stopped working on this comes down to 2 “issues” I ended up facing:
Issue A: A major change in how the faces are calculated was implemented in Unity 2018.3.
Whenever awesome new stuff arrives, there’s always the other side of the coin, and this time that other side is so hard to manage that, even a year later, nobody has found a viable out-of-the-box “solution” that doesn’t require major custom fixes.
To put it simply, Unity 2018.3 introduced a new way of handling faces in the engine’s rendering pipeline. This was in preparation for the Progressive Lightmapper implementation. As everyone who follows Unity’s updates knows, that is a really great system that removes a ton of stress and waiting when working in any scene, since the lighting bakes itself piece by piece and you can keep working in the scene while it does. While the Enlighten lighting system gives great results, you usually have to turn off Auto-Bake when building a scene, since it redoes the bake of the whole scene each time something moves, is removed or is added. With complex scenes, that can take a long time, during which the CPU is clogged and the PC slows to a crawl.
Anyway, one thing that changed with the Progressive Lightmapper is how the Unity engine handles the models’ MATRIX.
The MATRIX is, to put it bluntly, the part of the system around which the model’s normals are calculated. It’s one of the key parts of toon shaders with outlines, since they make use of UNITY_MATRIX_IT_MV and UNITY_MATRIX_P to displace the surfaces based on the vertex position and its related faces’ orientation.
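The displacement those two matrices are used for boils down to a few lines of vector math. Here’s a minimal sketch of the idea in plain Python (the real thing lives in an HLSL vertex pass; `outline_offset`, `it_mv3` and `p3` are illustrative names, not Unity API):

```python
def mat_vec(m, v):
    """Multiply a row-major 3x3 matrix by a 3-vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def outline_offset(normal_object, it_mv3, p3, width):
    """2D clip-space offset for one vertex of the outline 'shell'.

    it_mv3: 3x3 part of the inverse-transpose model-view matrix
            (what UNITY_MATRIX_IT_MV provides in the shader).
    p3:     3x3 part of the projection matrix (UNITY_MATRIX_P).
    """
    n_view = mat_vec(it_mv3, normal_object)   # normal moved into view space
    n_proj = mat_vec(p3, n_view)              # projected screen-facing direction
    return [n_proj[0] * width, n_proj[1] * width]  # push the vertex along xy
```

The vertex shader adds that offset to the clip-space position, which is what inflates the mesh into the outline shell.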
It’s all technical, but the result is astoundingly simple to see.
As you can see, some corners get detached. This might sound simple to fix, but it’s not. Originally, the way the MATRIX handled groups of faces was based on the model’s smoothing groups (known as Hard Edges to 3D artists), and face angles weren’t taken into consideration.
So, previously, all the “groups” were handled as a whole and the “shell” effect pushed the outline outward in a perfectly uniform way. With the change to how the MATRIX is stored in the engine, the “new” groups are now handled based on normal direction. This means the previous Hard Edges are not used any more; instead, the engine decides by itself what is “detached” or not based on the angle between faces when rendered.
From my tests, the threshold is around 60-75 degrees.
This is why the capsule above (which has something like 15 degrees between faces) doesn’t seem to have “cut corners”, while the cylinder and cubes have theirs “cut”. (The 2 cubes have different corners because I tried the 2 common methods of handling the MATRIX data when generating the outline. One had better results, but still wasn’t a winner.)
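That angle-based splitting can be sketched as a simple threshold test on face normals. This is my reading of the behaviour, not engine code, and the 65-degree default is just a guess inside the 60-75 range from my tests:

```python
import math

def faces_detach(n1, n2, threshold_deg=65.0):
    """True if the angle between two unit face normals exceeds the
    threshold, i.e. the engine would treat the shared edge as hard
    and split the vertices (detaching the outline corner)."""
    dot = sum(a * b for a, b in zip(n1, n2))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle > threshold_deg
```

A cube corner (90 degrees between faces) trips the threshold, while a 15-degree capsule segment doesn’t, matching what the screenshots show.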
The only viable solution to this problem has been to implement a workaround within the shader that uses another set of data to explicitly correct the normals after the MATRIX calculation, so that the outlines end up at the right size and in the right position.
To give an example, the famous JMO’s Toony Colors Pro asset had to implement a system (called the Smoothed Normals Utility) that generates the relevant data and stores it in the model’s Vertex Colors, Tangents or UV2, and even that doesn’t fix everything since some view angles still break. (And you have to sacrifice that data channel for the fix. If you use Vertex Colors for customization or, like I did, to affect things like the outline and highlights/shadows, you have to use Tangents or UV2 and, sometimes, the result isn’t better since each stores data differently.)
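The core idea behind a tool like that can be sketched quickly: average the normals of all vertices sharing a position, so split hard edges get one continuous outline direction, then bake the result into a spare channel. A rough Python illustration of the principle, not the asset’s actual code:

```python
from collections import defaultdict

def bake_smoothed_normals(positions, normals):
    """For each vertex, average the normals of every vertex sharing its
    position, then renormalize. The output is what would be stored in a
    spare channel (UV2, tangents or vertex colors) for the outline pass."""
    groups = defaultdict(list)
    for pos, n in zip(positions, normals):
        groups[tuple(pos)].append(n)

    smoothed = []
    for pos in positions:
        ns = groups[tuple(pos)]
        avg = [sum(c) / len(ns) for c in zip(*ns)]
        length = sum(c * c for c in avg) ** 0.5 or 1.0
        smoothed.append([c / length for c in avg])
    return smoothed
```

The regular lighting passes keep using the split normals, while the outline pass reads the smoothed set, which is why the fix costs you a whole vertex data channel.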
That’s the first issue I faced, and I went through one hell of a stress & depression phase when I discovered it as I upgraded to Unity 2018.3.
Issue B: It was incompatible with many parts of the engine and many useful tools.
The only kinds of effects and tools compatible with it were those that apply their effect post-rendering. This issue, which I was working around until I ran into Issue A explained above, became highly apparent once I built an actual scene. The catch with a relatively unique shader is that you want to keep that style all around.
To give a good example, The Legend of Zelda: Wind Waker did it really well:
While you can see a slight difference in detail (like in the trees) between the character and the static environment, you don’t really see a major change, and it makes you feel like the character truly belongs in that world.
If you want something with more detail, Killer is Dead is one hell of a great example:

You can see that both the characters and the environment have a similar visual style. The only part that differs is the highlight around the environment’s edges which, I guess, was kept to help the player notice the characters against the environment. (Kinda like an outline.)
To give an example of a bad implementation:
I could point toward every Sword Art Online game out there, but the one that truly struck me was Fatal Bullet, as it uses colorful characters with strong highlights and shadows (like the anime) but pairs them with an environment that has 2x to 4x more “realistic” detail and clearly not the same kind of highlights and shadows. It makes the characters look like they were Photoshopped into the scene.
So, all this to explain that I had quite some trouble managing the visuals of the close/small assets and characters against the environment, since I wanted to keep the look relatively consistent between both.
For example, I couldn’t bake the lighting, because Unity’s lighting systems (both Progressive and Enlighten) don’t take into consideration the shader’s way of handling the resulting shadows when screen space or anything like that is involved. This means I couldn’t make use of static lighting at all. The only way around this would be to use a post-rendering image effect that stores, during each frame, the lighting result (basically, a grayscale image of the screen view with the shadow and highlight results) and replaces the lighting rendered on screen (in deferred) with the one cooked up by the effect. In other words, a completely different approach, and all my work in the shader would be useless in this matter.
While I was able to implement GPU Instancing within the shader, the optimization it gave me was sub-par.
If I used this shader in the ways most people commonly build their scenes in Unity, I ended up with a ton of draw calls (way too many) with barely anything to show for it. For all the VRAM I was saving with small simple textures, I was paying heavily in draw-call overhead, and the rendering pipeline was truly becoming a mess, with noticeable hold-ups during specific rendering stages that were easily traceable in the Profiler.
Trying it out on an old laptop I keep for the sake of testing gave me some good data about how “far” I could go with the shader. It could work well for games with a short line of sight, such as an isometric action RPG (Diablo style) or even tactical games. Even on mobile, it works relatively well for something like a city simulator with a grid of 20x20 “slots” filled with city parts. The shader wasn’t a total loss as long as I kept the rendering relatively short-sighted.
Still, the 2 projects I’m working on that required this shader involve, in some parts, long-range panoramic views of detailed and animated backgrounds.
The only way of making things work properly was to make heavy use of the LOD system, so that the models keep changing based on their distance from the camera and I save as many lit triangles as possible. Some assets needed at least 6 LODs, each with a different model that had to be modeled and UV-unwrapped manually. It’s literally like making the same 3D model 3-6 times almost from scratch, since the “fake” modeled lines had to be modified each time. That’s a lot more time spent on the models and textures together than if I had used a more conventional PBS shader that works with Unity’s lighting system.
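The distance-based switching itself is the easy part (Unity’s LODGroup handles it for you); the pain is authoring the models. For clarity, the selection logic amounts to something like this illustrative sketch, not engine code:

```python
def pick_lod(distance, thresholds):
    """Return the LOD index for a given camera distance.

    thresholds: ascending list of switch distances. Index 0 is the most
    detailed model; past the last threshold, the cheapest one is used.
    Each index maps to a separately modeled, separately unwrapped mesh.
    """
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)
```

With 6 LODs per asset, that's 6 hand-made meshes feeding one tiny switch like this, which is where all the extra work went.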
So, what I did was simply revert back to a more “flat” anime-like look and, to save a lot of time and avoid the Issue B explained above, I used one of the shaders that come with Toony Colors Pro as a base and modified quite a bit of it to fit my own needs. Since it also comes with a somewhat-fix for Issue A, it’s a win-(relatively)win for me to have switched gears.