Beginner in Graphics - On a Quest to do the foliage from The Witness

Hey everyone!

I’m going to preface this with the fact that I have scoured the world wide web in search of answers on this one, and I’ve actually found some. But the issue is I either can’t seem to get them to work correctly, or I don’t fully understand them. I’m relatively new to graphics; I have a background in CS, and I’ve been learning shaders and CG for a couple of months now. I’m also not a bad modeler, and I’m super familiar with Unity on the editor and C# side.

So anyway, I just cannot get this effect to work. First of all, a picture of The Witness:

So these look super cool, I’ve seen this look in a few other games, etc. They give this feel of almost an impressionist painting - it’s awesome. So now into what I have heard about how to do this:

  1. You gotta correct your normals with something like Normal Thief in 3ds Max or Normal Edit in Blender
  2. Write a shader that essentially drops the alpha in relation to the view direction (normal dot view dir) to fade out quads with their edges facing the camera
  3. Add to that shader AlphaToMask with 4x MSAA

So here’s what I’ve got:

  1. Corrected normals with standard surface shader (This looks kinda good but trust me, this is a perfect angle, the whole seeing the quad edges is a huge issue here, plus some other thing I’m not stoked on)

So first off, you can still totally see the quad edges - each quad seems to basically have its own shadow, making it rather obvious we have a bunch of quads on a stick. Right off the bat - should the normal transfer (from an elliptic UV sphere) be fixing that? Did I do that part wrong somehow? This is clearly way better than the quads without the normal correction I did. It’s like if we could just blend/smooth this all out somehow it would work pretty well… But still, in The Witness and other games like it, you seriously can barely tell they are using quads with leaf textures - how do they get that? Possibly the AlphaToMask thing? Let’s continue…

Okay let’s try that shader idea…

Shader "Nature/Witness/leaves"
{
    Properties
    {
        [PerRendererData] _MainTex ("Sprite Texture", 2D) = "white" {}
        _Color ("Tint", Color) = (1,1,1,1)
        _Cutoff ("Shadow alpha cutoff", Range(0,1)) = 0.5
        _EdgeTransparency ("Edge Transparency Factor", Range(0,3)) = 3
    }
    SubShader
    {
        Tags
        {
            "Queue"="AlphaTest"
            "IgnoreProjector"="True"
            "RenderType"="TransparentCutout"
            "PreviewType"="Plane"
            "CanUseSpriteAtlas"="True"
        }
        LOD 200
        Cull Off
        Lighting On
        AlphaToMask On
        CGPROGRAM
        #pragma surface surf Lambert addshadow alphatest:_Cutoff
        sampler2D _MainTex;
        fixed4 _Color;
        float _EdgeTransparency;
        struct Input
        {
            float2 uv_MainTex;
            float3 viewDir;
        };
        void surf (Input IN, inout SurfaceOutput o)
        {
            fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * _Color;

            // normal shader
            o.Albedo = c.rgb;
            o.Alpha = c.a;
            o.Alpha *= 1.0 - (dot(normalize(IN.viewDir), o.Normal) * _EdgeTransparency);

            // this is what I used in the test video with the white/black showing normal dot view
            //o.Albedo = 1.0 - (dot(normalize(IN.viewDir), o.Normal) * _EdgeTransparency);
            //o.Alpha = 1;
        }
        ENDCG

    }
    Fallback "Legacy Shaders/Transparent/Cutout/VertexLit"
    Dependency "OptimizedShader" = "Hidden/Nature/Tree Creator Leaves Fast Optimized"
}

So here’s my shader - pretty straightforward: using a surface shader, gunna drop the alpha based on the normal-dot-view like was proposed across the internet, I think first appearing on The Witness dev blog.

But here’s the issue. I corrected all the normals. So it’s not going to fade out the weird edges of the quads anymore, it’s just gunna fade out whatever is right in front of me. With uncorrected normals, this technique actually works BETTER. Here’s a test I did to show my math is working, this is just passing that dot calculation line straight to albedo:

So this is basically what I just said would happen - it’s just fading out based on the corrected normals, which is not really the reason we were doing this whole fade-out-the-edge-quads thing. Basically this just ends up cutting out a piece of the tree in front of the cam - it looks exactly like we have the camera’s clipping distance set super high. AlphaToMask is in there; from what I can tell, it was basically inconsequential.

So from what I can tell, the standard shader is working the best - is there something I’m not getting about this technique? How do you get that billboard-esque effect without actually billboarding and having all the terrible artifacts that would come with it? It looks so simple, hence why I thought I might try it, but it’s turning out to be quite a mystery…

Any help would be greatly appreciated! Thanks in advance!

There are some pretty interesting reads on the official blog - be sure to check out the comments:


As far as I remember, normal bending is a big part of the look of The Witness trees (also look in the vertex normals section in this link: Foliage - polycount), so you would have to manually edit your trees, or create your own, to do that.


Thanks for the response!

Yeah I’ve looked through those - basically that’s where I’m getting #1 and #2 from -

#1 (Correcting Normals)

#2 (Normal Fadeout Shader)

The issue I’m having is that doesn’t seem to make sense - if your normals are corrected to a sphere, they aren’t going to be very helpful in telling when a plane is at an unflattering angle, because they no longer represent the way that plane is facing; they represent the way the sphere is facing, which is always out towards the viewer (not what we want).

Hence, I’m assuming I’m just not fully up to speed with what might be going on here - or do you think this is weird too?

Hmm, fair point. Maybe they had two sets of normals? One for lighting and one for fades? You could potentially pack your normals into vertex colors in Unity and then have two sets to work with as well?
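A rough sketch of how that packing could look on the shader side (my own assumption of a convention, not anything from The Witness - the idea is the flat quad normals get baked into vertex color by the modelling tool, remapped from [-1,1] to [0,1]):

```hlsl
struct appdata
{
    float4 vertex : POSITION;
    float3 normal : NORMAL; // bent/spherical normals, used for lighting
    float4 color  : COLOR;  // original flat quad normals, packed by the tool
};

// hypothetical helper: recover both normal sets in the vertex shader
void unpackNormals(appdata v, out float3 lightingNormal, out float3 fadeNormal)
{
    lightingNormal = v.normal;
    fadeNormal = normalize(v.color.rgb * 2.0 - 1.0); // undo the [0,1] remap
}
```

The lighting path would use the first set as usual, while the view-fade dot product would use the second.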

Yeah I have the normal thief thing working from here: Foliage - polycount - there’s a good tutorial for Blender if anyone in the future is reading this thread

https://forums.unrealengine.com/community/community-content-tools-and-tutorials/125884-spherical-normals-for-trees-blender?152750-Spherical-Normals-for-Trees-(Blender)=

it helps like A LOT - but in my first pic it’s just not blending right or something, still lots of sharp edges etc.

That’s not a bad idea - I’ll try that out - but honestly I’m pretty sure there’s some type of blending thing I’m not doing right. I’ve heard tales of Alpha to Coverage being used… Not really sure how that works or what it does, but simply turning on AlphaToMask in my shader and turning on 4x MSAA didn’t really do anything. Basically there’s some way they are softening the edges between the planes post-vert (not sure if that’s a term, but basically I mean after the triangulation part of the pipeline) - my thought is that’s more important than the normal-based fade outs.

(for the record, I just tried the normal fading thing on a normal bent tree I have and it looks ridiculous, as we all guessed it would, just confirming)

About the rest, some thoughts:

Maybe self shadowing should be turned off? (maybe turn shadows off in general for testing). If we want to avoid any harshness, shadows from direct sunlight are probably not a good idea.

About the edges: Maybe he is using alpha blending instead of alpha testing? There’s even some technique where you alpha test for z-testing and then at the very edges you alpha blend. I think you can simply do it in two passes.

(old link but I couldn’t find anything better right now : Alpha Test with Alpha Blend? )
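The two-pass idea could be sketched roughly like this in ShaderLab (my guess at the structure, assuming the same _MainTex/_Cutoff properties as the shader above; each pass would need its own CG program):

```hlsl
// Pass 1: alpha-tested, writes depth - the solid core of each leaf.
Pass
{
    ZWrite On
    // fragment shader: clip(col.a - _Cutoff); then output col opaque
}
// Pass 2: alpha-blended, no depth writes - soft fringe layered on top.
Pass
{
    ZWrite Off
    ZTest LEqual
    Blend SrcAlpha OneMinusSrcAlpha
    // fragment shader: keep only the semi-transparent edge pixels,
    // e.g. clip(_Cutoff - col.a); then output col for normal blending
}
```

The first pass gives you correct-ish z-ordering for the opaque interiors; the second only touches the thin fringe, so sorting errors there are much less visible.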

Another thing I heard was possibly turning off casting shadow and using an invisible object to cast the shadow onto the leaves - I’m gunna try a couple of these things and get back with the results

With receive Shadows

Without Receive Shadows

So that’s progress

I did a super hacky implementation of the view angle fading and it seems to help a bit:

With fading
Without Fading

(maybe open them in new tabs and switch back and forth so it’s more obvious what changed :wink: )

wait do you mean the double normals idea or the edge blending 2 pass thing

The double normals, without really using double normals though. I use baked lighting and no real time lights at all, so after baking (with my somewhat bent normals), I just recalculated the normals in the mesh importer and tweaked my shader :stuck_out_tongue: (this is why I said super hacky). It’s by no means a proper solution.

I just wanted to see how it looks. It works okay. Although I’m not sure if it’s worth it in my case. The leaves are all alpha blended, so there’s no proper z-order, and when the camera is moving, it looks like a mess.

So fading the leaves that are at an angle seems like polishing something that has otherwise very fundamental problems. Plus it’s for a mobile game and I don’t want to spend performance if I don’t have to.

Yep, The Witness is using two normals. It’s possible to encode both of these into the mesh in an external program by storing the normals in the vertex colors like @AcidArrow suggested, but you can also calculate these in the editor and store them in an extra UV channel, or calculate them in the fragment shader without anything else!

float3 positionToNormal(float3 position)
{
    float3 dpx = ddx( position );
    float3 dpy = ddy( position ) * _ProjectionParams.x;
    return normalize(cross(dpx, dpy));
}

The normal will be in whatever space the position is in. The normal might be flipped vs the actual surface normal, but that’s not an issue for this use case since you just need the absolute value of the dot product. However there’s one issue with using that code in a Surface Shader. Unity’s Surface Shader is overzealous in its optimizations and won’t pass the worldPos to the surf function if only used by that function! Basically Surface Shaders will fight you every step of the way here and you’ll have to pass the data you need yourself with a custom vertex function. I’d recommend encoding the normal in the vertex color instead if you’re going to stick with surface shaders.

Alternatively you could encode one of the normals in a UV channel using an external program using some form of 2 channel normal encoding, like spheremap or octahedron encoding. Basically Unity will only use the first two components of the UV when importing a mesh so you can’t just store the full vector even though most modelling programs do actually support 3 component UVs.
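For reference, octahedron encoding can be sketched like this (a generic implementation of the technique, not anything Unity-specific - the encode would run in your export tool, the decode in the shader, with the two channels remapped to [0,1] for storage in the UV):

```hlsl
float2 signNotZero(float2 v)
{
    return float2(v.x >= 0.0 ? 1.0 : -1.0, v.y >= 0.0 ? 1.0 : -1.0);
}

// unit normal -> 2 channels in [-1,1]
float2 octEncode(float3 n)
{
    float2 p = n.xy / (abs(n.x) + abs(n.y) + abs(n.z));
    // fold the lower hemisphere over the upper one
    return (n.z < 0.0) ? (1.0 - abs(p.yx)) * signNotZero(p) : p;
}

// 2 channels -> unit normal (runs in the shader)
float3 octDecode(float2 e)
{
    float3 n = float3(e.x, e.y, 1.0 - abs(e.x) - abs(e.y));
    if (n.z < 0.0)
        n.xy = (1.0 - abs(n.yx)) * signNotZero(n.xy);
    return normalize(n);
}
```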

I wrote a lot about Alpha to Coverage here:
https://medium.com/@bgolus/anti-aliased-alpha-test-the-esoteric-alpha-to-coverage-8b177335ae4f

I mention Surface Shaders briefly near the end. I’ve got a second part that’s been in the works for the last few months I may never finish, but it goes a bit more deeply into the issues of using alpha to coverage in a Surface Shader. Again, they’ll be actively fighting you as Unity never expected AlphaToMask to be used with a Surface Shader. The short version is you can’t use the addshadow or alphatest keywords as they both prevent AlphaToMask from actually doing what you expect. Specifically the alphatest keyword does the clipping, then forces the output alpha value (which Alpha to Coverage needs) to 1.0 instead of the alpha you set, and the default shadow pass always outputs an alpha of 0.0 and the shadowcaster pass addshadow generates doesn’t know to turn AlphaToMask off.
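The short vert-frag version of that (this is the alpha-sharpening trick from the article above, assuming _MainTex, _Color and _Cutoff as in the earlier shader) keeps a real alpha value for the coverage hardware instead of clipping and then forcing alpha to 1.0:

```hlsl
// fragment shader for an "AlphaToMask On" pass: rescale alpha so the
// transition straddles _Cutoff over roughly one pixel, instead of
// using clip()/alphatest which destroys the alpha the hardware needs
fixed4 frag (v2f i) : SV_Target
{
    fixed4 col = tex2D(_MainTex, i.uv) * _Color;
    col.a = (col.a - _Cutoff) / max(fwidth(col.a), 0.0001) + 0.5;
    return col;
}
```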


For Wayward Sky I disabled shadow casting for bushes, but kept shadow receiving, which fit with the style of that game. It also meant the bushes would still get shadows cast on them. Because Wayward Sky is a VR game, the lack of shadow casting for grounding them is less important, but having them be fully lit all the time was important.

An alternate solution I investigated was to modify the shadowcaster pass (which is used for both shadow casting and receiving by Unity) to push the surfaces towards the camera when calculating shadow receiving. This can lead to some minor swimming of the shadows as you move around the object, and may cause you problems if you use the camera depth texture for other purposes. That swimming was also way too noticeable for VR to be usable.

Another hack would be to have your bushes use entirely separate geometry for shadow casting. Basically you’d need to have a blob mesh that surrounds the bush and has its faces inverted, with some kind of bush texture on it that roughly matches your bush’s shape. Won’t be exact, but will give you ground shadows without the issues of self shadowing.

The Witness sidesteps the issue entirely by changing how they handle shadow maps on foliage. There are two main ways to do shadow maps. One is to sample the shadow map using a hardware compare sampler, which gives you bilinearly softened shadow edges: when you sample the shadow map in the shader you pass in the position in the texture for both the UV and the depth, and the hardware does the comparison for you. This is what Unity does, as well as doing multiple samples to soften it further if soft shadows are enabled. The alternative is to just get back the raw depth value from the shadow map and do the distance comparison on your own. This is a little slower and you don’t get the bilinear softening, but you can do stuff like fade in the shadow rather than a binary on/off in the depth compare. The Witness appears to use this second method on foliage, which gives an additional volumetric appearance to the shadows too, but they use the other method for everything else.

It is not trivial to modify Unity to do this. And the most straight forward ways are an all or nothing change.
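The manual-compare idea might look something like this (a sketch only - depth conventions differ across platforms and with reversed-Z, and fadeRange is a hypothetical tunable, not a Unity variable):

```hlsl
// fade the shadow in over a depth range instead of a hard binary compare
float softShadowTerm(sampler2D shadowMap, float4 shadowCoord, float fadeRange)
{
    float occluderDepth = tex2Dproj(shadowMap, shadowCoord).r; // raw stored depth
    float receiverDepth = shadowCoord.z / shadowCoord.w;
    float behind = receiverDepth - occluderDepth; // how far past the occluder we are
    return 1.0 - saturate(behind / fadeRange);    // 1 = fully lit, 0 = fully shadowed
}
```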

Thanks for the response - I’ve read a bunch of your posts across the web in relation to this topic (that’s why I asked about Alpha to coverage and the invisible blob shadow) so thanks for taking the time to respond yet again about this! haha! I’m just starting to get into this stuff so It’ll take me a while to unpack all this new info - probably end up with some more questions lol.

Well, I went and learned how the z-buffer works, painter’s algorithm, and all this unsorted transparency stuff - it is pretty annoying. After that I just decided to start from scratch. Wrote a basic vert-frag (from bgolus’ advice, so there’s no more dealing with all this under-the-hood Unity stuff). Started with an unlit shader, added diffuse shading and a hard-coded alpha test (so I could use blending if I needed to)…

That… Um, well surprisingly, that looks pretty good? I’m kind of astounded I didn’t try this yet - literally have like 20 shaders in this project and none of them was just a simple hand-made diffuse shader. I think it’s ready for AcidArrow’s normal hack now… going to use bgolus’ math to calculate the old normals and see what happens.


That looks pretty great!

Just in case you need it, (although you probably won’t, since you seem to know your stuff) my hack was doing this in vert:

half3 viewDir = normalize(ObjSpaceViewDir(v.vertex));
o.rim = abs(dot(normalize(v.normal), viewDir));
o.rim = smoothstep(0.2, 0.25, o.rim);

And then c.a *= i.rim; in frag.

I used actual normals, so switch v.normal with your decoded ones.

It’s pretty basic, maybe there are better approaches to it. Also those smoothstep values are pretty important and probably need a lot of tweaking and testing to see what works best.

So I got this working - the issue is, while it looks fine in a still shot, when you move the cam it’s pretty obvious these edges are disappearing - basically because I’m using a cutoff, it’s all or nothing - so we finally come to a need for blending. The only reasonable method I saw for order-independent transparency was stochastic blending with MSAA… So I’m gunna try to get that working.

Note, as far as performance goes, it may be worth it to just stick with the last thing I posted - doing this normal recalculation has to happen at the frag level for some weird reason (not sure, I just got an error if I called it in the vert), and then now we’re turning on 4x MSAA just for this blending thing - my guess is for most purposes the above picture was pretty good.

Regardless, I’m gunna see what happens just to see

To clarify - we need the blending working for the normal fading to actually fade, not just cut.

Derivatives (what those ddx and ddy functions are) only work in the fragment shader. See the alpha to coverage article I linked to above. I go into what they are and why they only work in the fragment shader.

An alternative that a lot of AAA games use (including The Witness when the quality setting is lowered!) is a dithered fade. You can either use some kind of Bayer dither or a noise texture/function. Unity actually has two built-in Bayer dither textures it uses for things like LOD fading or dithered shadows.
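A 4x4 Bayer version of that fade could be sketched like so (the generic technique, not Unity's built-in texture; in practice you'd compare your fade alpha against the threshold in the fragment shader using the pixel's screen position):

```hlsl
// ordered-dither threshold for the current pixel, in [0,1)
float bayerThreshold(float2 pixelPos)
{
    const float4x4 bayer = float4x4(
         0.0,  8.0,  2.0, 10.0,
        12.0,  4.0, 14.0,  6.0,
         3.0, 11.0,  1.0,  9.0,
        15.0,  7.0, 13.0,  5.0) / 16.0;
    uint2 p = uint2(pixelPos) % 4;
    return bayer[p.x][p.y];
}

// in the fragment shader:
// clip(fadeAlpha - bayerThreshold(screenPixelPos));
```

Because the pattern is fixed in screen space, pixels drop out progressively as fadeAlpha shrinks, which reads as a smooth fade at a distance without any actual blending or sorting.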