Beyond wrinkle maps to real-time tension maps: what’s the current state in Unity?

Hi, I’m new here and hope I am in the right place.

I’m interested in facial animation and, in particular, I wonder what the state of the art in Unity is for making a tension map shader (if that is the right question). My reading suggests a tension map is a better implementation than wrinkle maps.

There’s this reference from 2012 about a curvature map (without the code):

“This curvature calculation is actually a serious problem with implementing the technique in Unity. This is because ddx and ddy (fwidth(x) is just: abs(ddx(x)) + abs(ddy(x))) are not supported by ARB, which means Unity can’t use the shader in OpenGL.”

In this thread about wrinkle maps, user @abatcat references some tension map resources: [WIP][BETA] SDS Wrinkle Maps

And that’s about all I can find.

If you know of any shader code or resources I could look at, would you please let me know?

Thanks

Moot issue at this point, unless you’re looking to support OpenGL ES 2.0 (older mobile devices). Derivatives are supported by all graphics APIs Unity supports apart from GLES 2.0. However, curvature mapping is more about basic skin shading than wrinkle maps, and is arguably better done with a texture anyway.

As far as wrinkle maps are concerned, the most recent thing I can think of (apart from the thread you linked to) is the official Unity stuff for their Blacksmith demo … which as a warning probably doesn’t work anymore unless you get Unity 5.2 or 5.3.
https://blogs.unity3d.com/2015/05/28/wrinkle-maps-in-the-blacksmith/

Tension maps are basically a way to drive the blending in of wrinkle maps rather than a replacement. In the method described for the Blacksmith demo they basically had multiple versions of the face’s normal maps, like relaxed, surprised, and angry, and would blend between them depending on the blend shapes being used and masked by specific regions.

Tension maps sidestep that manual masking and blending in favor of calculating a per-vertex tension, or perhaps more accurately per-vertex compression, to blend in a single wrinkle map. I don’t know of any Unity shader code available for this, either for purchase or for free. The closest I found was this:

https://www.youtube.com/watch?v=M-EkvgX4DeE

(Warning, many videos on this channel are NSFW due to the author’s primary character being a nude woman)

If you watch that video there are some quick glimpses of his code, but not enough to really know exactly how it’s being done. In the comments he makes this statement:

That’s not a ton of info, but in some other videos he makes reference to Keijiro’s Skinner asset for storing information in UV space, but I don’t know if he uses that method still.

I’m always wondering how he did it though. I’m also kinda surprised the wrinkle maps topic gets so little discussion.

Thanks bgolus and Reanimate_L for your replies.

Yes I’ve seen that guy’s videos and tried to extract some information but, fair enough I guess, he doesn’t give many details apart from (a somewhat maddening) “It’s easy. It’s simple”. Clearly a dude/tte with some serious coding chops.

As far as I can tell, the Blacksmith’s wrinkle maps don’t work in Unity 2017, which I am running.

Thanks again bgolus for clearing up my somewhat random snippets gathered from the internet.

I am surprised as well, Reanimate_L, that tension maps aren’t more widely discussed. I guess most game developers are satisfied with blendshapes. I’m hoping to make films/videos/movies with Unity (sounds grandiose, but my goals are modest) and believe that facial animation is a most valuable secret sauce for getting some emotional connection between developers and audience.

I’ll have a look at the Skinner shaders and see if anything pops up. (I’m actually feeling the elephant without any idea of where I am yet)

Cheers

For any visitors from the future here’s some notes collected from the pages of the producer of the video referenced above. Another warning that many of those videos are NSFW.

  1. Modify the mesh to average tangents (same position → same tangent).
  2. Bake the blendshapes to a float texture, ordered by vertex ID.
  3. Pass the vertex ID from the vertex shader.
  4. Pass the vertex ID through the hull shader.
  5. Modify the vertex positions in the domain shader (?).

The red channel is curvature, and the green channel is auto-generated squeeze tension.

At edit time, bake the edge lengths to a map. At runtime, compare against the current edge length in the hull shader. It is very simple.

curvature + lerp ( 1 , baked curvature , tension )
for wrinkle

Tension is the most cutting-edge technique for game characters. I heard EA is using it for their games, but I’m not certain. A plain skinned character is too low quality, but hundreds of blendshapes aren’t viable for game characters. A tension system replaces hundreds of blendshapes with only two: a squeezed shape and a stretched shape. Every vertex chooses the suitable shape based on its tension value. But it’s still a WIP for now.

There is no wrinkle weight map.
Just check the edge length and compare it against the original edge length.

Curvature is calculated in the hull shader,
and it needs some blur.
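
Pieced together, those notes describe a simple pipeline: bake each triangle’s rest edge lengths, compare against the deformed lengths at runtime, and use the result as a tension value that drives the curvature lerp. Here’s a CPU-side Python sketch of just that arithmetic (all the names here are mine, not from the video):

```python
import math

def edge_lengths(verts, tris):
    """Summed edge length (perimeter) per triangle."""
    out = []
    for a, b, c in tris:
        out.append(math.dist(verts[a], verts[b])
                   + math.dist(verts[b], verts[c])
                   + math.dist(verts[c], verts[a]))
    return out

def tension(rest_len, current_len):
    """Negative = squeezed, positive = stretched, 0 = relaxed."""
    return current_len / rest_len - 1.0

def wrinkle_curvature(curvature, baked_curvature, t):
    """The 'curvature + lerp(1, baked curvature, tension)' formula above."""
    t = max(0.0, min(1.0, abs(t)))  # use squeeze/stretch magnitude as blend weight
    return curvature + (1.0 - t) * 1.0 + t * baked_curvature

rest = edge_lengths([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
squeezed = edge_lengths([(0, 0, 0), (0.8, 0, 0), (0, 0.8, 0)], [(0, 1, 2)])
print(tension(rest[0], squeezed[0]))  # prints a negative value: the triangle shrank
```

In the actual technique this comparison happens per triangle in the hull/geometry shader; the sketch only shows the math.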

Edit: Wowsers! Didn’t realise text links to Youtube would go live and be embedded! Removed.

Totally forgot about this thread. Geez, that is very elaborate tech for an NSFW game. . .

I did that for my master’s degree in computer graphics in 2008 using two different strategies: wrinkle maps, and tension by deformation of the model. I published a paper on the wrinkle maps approach (https://dspace5.zcu.cz/bitstream/11025/11110/1/Reis.pdf); however, the tension version I only have in Portuguese (http://repositorio.unicamp.br/bitstream/REPOSIP/259372/1/Reis_ClausiusDuqueGoncalves_M.pdf). I’ll update my approach using Unity…

Thanks for your reply @clausiusreis I had a look at your thesis using Google Translate.

As I understand it (very poorly) your steps for Real Time Wrinkle Maps are:

1: Determine the Area/s of Interest
2: Create a Normal Map with Wrinkles in the Area of Interest
3: Save the Relaxed Pose vertices
4: Animate the face
5: Get the Motion Vectors in the Area of Interest
6: Apply the Smoothing function
7: Calculate the blend between the Relaxed Pose and the Normal Map
8: Render face with blended Normal Map

I’m not clear how the Motion Vector velocity over time determines the weight of the Normal Map (if I understand correctly). Does this mean that if I smile slowly vs. smile quickly, the weight of the blended normal map will be different?

Cheers!

That’s pretty much it, @ghtx1138. The normal maps at the time were created manually by an artist; now we have tools that create them automatically by simulating facial movement.

The calculations, simply speaking, consider a start point for each vertex (relaxed state) and a direction vector for each vertex, pointing in the direction that would produce wrinkles (not a simulation). For any vertex displacement, the distance from the relaxed state to the current state is computed (Fig. 4.8-A) and attenuated by the angle with the original displacement vector (Fig. 4.8-B). If the current vertex position is inside the “cone” of the displacement vector, wrinkles are displayed (Fig. 4.9). The calculations found in my thesis (2nd method) result in a scalar from 0 to 1 (0% to 100%) that we use to smooth the wrinkle map shader over the face.
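
To illustrate (this is my own reading of the description above, not code from the thesis; the names and the exact attenuation shape are assumptions), the per-vertex scalar could look like:

```python
import math

def wrinkle_weight(rest_pos, current_pos, wrinkle_dir, cone_cos=0.5, max_dist=1.0):
    """Scalar in [0, 1] used to blend in the wrinkle normal map.

    rest_pos/current_pos: vertex positions; wrinkle_dir: unit vector pointing
    in the direction that produces wrinkles; cone_cos: cosine of the cone's
    half-angle; max_dist: displacement that yields full wrinkle intensity.
    """
    disp = [c - r for c, r in zip(current_pos, rest_pos)]
    dist = math.sqrt(sum(d * d for d in disp))
    if dist == 0.0:
        return 0.0
    # Cosine of the angle between the displacement and the wrinkle direction.
    cos_a = sum(d * w for d, w in zip(disp, wrinkle_dir)) / dist
    if cos_a < cone_cos:  # outside the cone: no wrinkles
        return 0.0
    # Attenuate by angle and clamp the displacement contribution to [0, 1].
    return min(dist / max_dist, 1.0) * (cos_a - cone_cos) / (1.0 - cone_cos)

# Moving straight along the wrinkle direction gives full weight:
print(wrinkle_weight((0, 0, 0), (1, 0, 0), (1, 0, 0)))  # 1.0
```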

Download links:
Fig 4.8 and Fig 4.9
Face models expressions without shader
Face model expressions with shaders and resulting wrinkles

I will produce an example in Unity with more up-to-date scripts and post the link here. When I finished my thesis I created the GLSL shaders manually… but hey, it worked on low-spec PCs! :smiley:


Hi @clausiusreis
Thank you for taking the time to describe how it works in detail and for posting the pics.
I really look forward to seeing your project. I’m using the HDRP shaders at present but I don’t really know much about how they work.
I’m trying hard to remember what computers in 2008 were like. I’m guessing I had an old 486! :slight_smile:
Cheers!

That’s pretty great, @clausiusreis ! Looking forward to seeing this in action in Unity. I’ll be watching this thread for it!

Hi

Hi clausiusreis

We are working on a digital avatar in HDRP in Unity and are looking for a wrinkle solution. I see in your post that you mention possibly posting a link to a Unity example script. Is that something you might still be able to do for us? We would really appreciate your help with this critical piece of the digital avatar solution. Thank you.

Hi all,

Slight necromancy on this thread as I’ve found one way of doing it. UnityXGamerMaker’s videos (above) were definitely a big help. I tried a few variations on this until I found a version I was happy with.


First off, we need to bake the triangle edge lengths to a texture. This can be done with the following code, either on awake/startup or at edit time.

        // Bake each triangle's total edge length (its perimeter) into a
        // 1-pixel-tall texture, indexed by triangle ID.
        Mesh _mesh = GetComponent<SkinnedMeshRenderer>().sharedMesh;
        Vector3[] verts = _mesh.vertices;
        int[] triangles = _mesh.GetTriangles(0);
        int triCount = triangles.Length / 3;
        // Note: ARGB32 clamps stored values to [0, 1], so this assumes the
        // summed edge lengths stay below 1 in model space; a float format
        // such as RFloat would be safer for larger meshes.
        Texture2D triangleLengthTexture = new Texture2D(triCount, 1, TextureFormat.ARGB32, true);
        triangleLengthTexture.filterMode = FilterMode.Point;
        triangleLengthTexture.wrapMode = TextureWrapMode.Clamp;
        for (int i = 0; i < triCount; i++)
        {
            // Sum the lengths of the triangle's three edges.
            float l =
                (verts[triangles[i * 3]] - verts[triangles[i * 3 + 1]]).magnitude
                + (verts[triangles[i * 3 + 1]] - verts[triangles[i * 3 + 2]]).magnitude
                + (verts[triangles[i * 3 + 2]] - verts[triangles[i * 3]]).magnitude;
            triangleLengthTexture.SetPixel(i, 0, Color.white * l);
        }
        triangleLengthTexture.Apply();
        GetComponent<SkinnedMeshRenderer>().material.SetTexture("_TriangleLengthBuffer", triangleLengthTexture);
        GetComponent<SkinnedMeshRenderer>().material.SetFloat("_TotalTriCount", triCount - 1);

Then, for the shader code, use a geometry shader to calculate the triangles’ edge lengths. We use the system-value semantic SV_PrimitiveID to get the triangle index, and then sample our baked texture using tex2Dlod in the geometry shader.

[maxvertexcount(3)]
void geom(triangle v2g IN[3], inout TriangleStream<g2f> triStream, uint fragID : SV_PrimitiveID)
{
    g2f o;
    // Current summed edge length of this triangle.
    float l = distance(IN[0].vertex, IN[1].vertex) + distance(IN[1].vertex, IN[2].vertex) + distance(IN[2].vertex, IN[0].vertex);
    // Rest-pose length baked into the texture, indexed by triangle ID.
    float originalLength = tex2Dlod(_TriangleLengthBuffer, float4(((float)fragID) / _TotalTriCount, 0.5, 0, 0));
    float diff = l - originalLength * _SquashStretchOffset;
    for (int i = 0; i < 3; i++)
    {
        o.worldPos = IN[i].worldPos;
        o.normal = IN[i].sNormal;
        // Positive difference = stretched (green), negative = squashed (red).
        if (diff > 0)
            o.triLength = fixed2(0, pow(_StretchBlendStrength * (diff * 50 - _StretchBlendThreshold), 3));
        else
            o.triLength = fixed2(-pow(_SquashBlendStrength * (diff * 50 + _SquashBlendThreshold), 3), 0);
        triStream.Append(o);
    }
    triStream.RestartStrip();
}

Then you’ve got the difference in triangle length stored in the g2f. For mine, I wrote squash to red and stretch to green. In the fragment, this gets manipulated a little further by sampling against the curvature. Curvature is done in the fragment shader like this (though come to think of it, it’d probably be better within the vertex shader):

fixed4 frag(g2f i) : SV_Target
{
    // Curvature: how quickly the normal changes relative to how quickly the
    // world position changes across the screen.
    float curvature = clamp(length(fwidth(i.normal)), 0.0, 1.0) / (length(fwidth(i.worldPos)) * _TuneCurvature);
    return fixed4(saturate(i.triLength.x), saturate(i.triLength.y), saturate(curvature), 0);
}

// For reference, the built-in fwidth intrinsic is equivalent to:
float fwidth(float x)
{
    return abs(ddx(x)) + abs(ddy(x));
}
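
The curvature line above is just a ratio: how fast the normal changes versus how fast the position changes. A finite-difference Python sketch of that same ratio, with hypothetical names (the real shader gets its per-pixel deltas from fwidth):

```python
import math

def curvature_estimate(n0, n1, p0, p1, tune=1.0):
    """Change in normal over change in position between two nearby
    surface samples; the shader computes the same ratio with fwidth()."""
    dn = math.dist(n0, n1)  # |delta normal|
    dp = math.dist(p0, p1)  # |delta position|
    return min(dn, 1.0) / (dp * tune)

# A flat surface (constant normal) has zero curvature:
print(curvature_estimate((0, 0, 1), (0, 0, 1), (0, 0, 0), (0.1, 0, 0)))  # 0.0
```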

My shader uses a second pass, mainly so that I can use surface shader inputs/outputs without too much extra work on the lighting. But it also lets us do some brute-force Gaussian blurring to soften the transition between stretched and still triangles.

void surfVert(inout appdata_full v, out Input o)
{
    UNITY_INITIALIZE_OUTPUT(Input, o);
    // Sample the first pass's output (via GrabPass) at this vertex's screen
    // position, plus four diagonal offsets, to soften triangle edges.
    float4 screenUVs = ComputeGrabScreenPos(UnityObjectToClipPos(v.vertex));
    o.squashStretch = tex2Dlod(_GrabTexture, float4(screenUVs.xy / screenUVs.w, 0.0, 0.0));
    o.squashStretch += tex2Dlod(_GrabTexture, float4((screenUVs.xy + fixed2(0.0001, 0.0001)) / screenUVs.w, 0.0, 0.0));
    o.squashStretch += tex2Dlod(_GrabTexture, float4((screenUVs.xy + fixed2(-0.0001, 0.0001)) / screenUVs.w, 0.0, 0.0));
    o.squashStretch += tex2Dlod(_GrabTexture, float4((screenUVs.xy + fixed2(-0.0001, -0.0001)) / screenUVs.w, 0.0, 0.0));
    o.squashStretch += tex2Dlod(_GrabTexture, float4((screenUVs.xy + fixed2(0.0001, -0.0001)) / screenUVs.w, 0.0, 0.0));
    // Clamp the summed samples back into [0, 1].
    o.squashStretch = fixed4(saturate(o.squashStretch.x), saturate(o.squashStretch.y), saturate(o.squashStretch.z), 0);
}

squashStretch.b is where we’re storing the curvature from earlier, so the wrinkle value that looked the best to me was:

float squash = lerp(0, IN.squashStretch.b, IN.squashStretch.r);
float stretch = lerp(0, IN.squashStretch.b, IN.squashStretch.g);

This gives us a color output for squash and stretch something like the below:

(embedded gif showing the squash/stretch color output)

Then, it’s simply a matter of blending the normals based on these values. Hope this helps anyone in the future who’s looking into this :slight_smile:


More necro, but I wanted to say thanks for actually coming back and posting your findings. I’m going to try to replicate this in HDRP on my end using your findings and info, and I’ll post if I can get something going.

I want to try and do it in shader graph just to see if it’s even possible with what they have currently, and also because writing shaders by hand feels like pulling teeth with a shotgun. Wish me luck.

Thanks again MrArcher!


Re-necroing the necro of the necro to say thanks, MrArcher! This is a very helpful example! Curious as well whether Shader Graph will support something like this. ^.^

As this technique uses a geometry shader, it isn’t compatible with Shader Graph at the moment (unless you’re ready to modify Shader Graph and the SRP).


Am I missing something here, or is there now a better way to achieve this using bump map composition?

I’ve read about it on the Unity blog post. What do you think?

Surface gradient based normals are great for compositing normals with different projections / UVs, or when dealing with procedural geometry where calculating correct vertex tangents would be difficult. That’s not really a problem here.

If you’re blending two normal maps with the same UVs, Reoriented Normal Mapping is the current state of the art. The more common “whiteout” or “UDN” style normal map blending that most things use is more than good enough, and most people aren’t going to see a difference.
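
For reference, whiteout blending is compact enough to sketch. A generic Python version operating on unpacked tangent-space normals in [-1, 1] (not Unity-specific code):

```python
import math

def whiteout_blend(n1, n2):
    """Whiteout-style blend of two tangent-space normals (already in [-1, 1]):
    sum the XY components, multiply the Z components, then renormalize."""
    x = n1[0] + n2[0]
    y = n1[1] + n2[1]
    z = n1[2] * n2[2]
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

# Blending with a flat normal returns (approximately) the detail normal:
flat = (0.0, 0.0, 1.0)
print(whiteout_blend((0.6, 0.0, 0.8), flat))  # ≈ (0.6, 0.0, 0.8)
```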

Most of the time when doing wrinkle maps you’re fading between an unwrinkled and a wrinkled variant of the normals, rather than having a base unwrinkled normal map and blending a wrinkles-only normal map on top. In that case, lerping between them is fine. I’m not sure surface gradient based normals confer any benefit here.

If I understand correctly, surface gradient based normals are overkill since wrinkle map composition is mostly “additive”?

I will read up on the Reoriented Normal Mapping subject then.

It’s not so much that it’s overkill, but rather there are no benefits (and may actually have more issues) over the more traditional approach.
