The Quest for Efficient Per-Texel Lighting

Hello Unity community! I am writing a 3D, first-person game that makes use of low-res, low-color, unfiltered textures, and I’m looking for a way to light the game that doesn’t spoil the charming retro look of the art.

My plan was to light the scene conventionally, using Unity lights and the Unity Standard shader, then create a posterization post-process effect. That was relatively easy. I created a function to map any RGB value to its nearest perceptual match in the game’s palette, and the results were just what I’d hoped. Conceptually it’s just a nearest-neighbor search over the palette, sketched below.
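For reference, here’s a minimal sketch of that palette snap, with a hypothetical _Palette array and plain RGB distance standing in for my real setup (which matches in a perceptual color space):

// Hypothetical post-process helper: snap a color to its nearest palette entry.
// PALETTE_SIZE and _Palette are placeholders for however the palette is fed in.
#define PALETTE_SIZE 16
float3 _Palette[PALETTE_SIZE];

float3 PosterizeToPalette(float3 rgb)
{
    float3 best = _Palette[0];
    float bestDist = 1e9;
    for (int p = 0; p < PALETTE_SIZE; p++)
    {
        float3 diff = rgb - _Palette[p];
        float d = dot(diff, diff); // squared distance is enough for comparison
        if (d < bestDist)
        {
            bestDist = d;
            best = _Palette[p];
        }
    }
    return best;
}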

As long as you don’t look too closely, anyway! Look at these ugly artifacts.

The lighting calculation happens per fragment, so when I posterize the result, I end up with these jagged-looking patches that break up the pixel grid. Rendering at low res works, of course, but it introduces all kinds of other artifacts as things scale. I really want to keep my high-res render. What I need is to calculate the lighting not per-vertex or per-pixel, but per-texel, to preserve the pixel grid of the sprites.

One technique would be to edit the standard shader’s forward path like so:

half4 fragForwardBaseTexel (VertexOutputForwardBase i) : SV_Target
{
    UNITY_APPLY_DITHER_CROSSFADE(i.pos.xy);

    FRAGMENT_SETUP(s)

    // 1.) Snap fragment UVs to the center of the nearest texel
    float2 snappedUVs = floor(i.tex.xy * _MainTex_TexelSize.zw) + 
                  _MainTex_TexelSize.xy/2.0;
    // 2.) Transform the snapped UV back to world space (aka ???)
    float3 snappedPosWorld = float3(0, 0, 0); // TBD: Transform the point!
    // 3.) Assign the snapped world position for use in lighting calculations
    s.posWorld = snappedPosWorld;
   ...
}

That way shadows, light source falloff, reflectance, everything will snap to the pixel grid. Seems like a really tidy idea. It also has the nice property of not overcalculating lighting when things are far away and texels are smaller than screen pixels.

But I cannot for the life of me figure out how to get the transform I need for step 2! The more I learn about the rendering pipeline, the more I worry that this might not even be conceptually possible. How do I go from an arbitrary UV location to world position in the fragment shader?

And beyond that, has anyone worked with or thought about per-texel lighting before? I am very open to other suggestions for how to accomplish this effect. Light maps calculated per frame, deferred rendering tricks, abandoning the standard shader and working on my own from scratch… I am open to anything. Thanks!

I’m maybe making some progress? I learned, perhaps erroneously, that “tangent space” is another term for UV space on a polygon, and that a tangentToWorld transform is available to the fragment shader. So I tried extracting that and mapping my snapped UVs to world space, and it seems like something happened, but nothing great. Now the attenuation of light seems to depend entirely on the world position of the light and nothing else… I guess this matrix is just for rotating vectors and doesn’t properly handle translations.

I also noticed my snapped UV calculation was wrong so I fixed it.

The current code:

half4 fragForwardBaseTexel (VertexOutputForwardBase i) : SV_Target
{
    // 1.) Snap fragment UVs to the center of the nearest texel
    float2 snappedUVs = floor(i.tex.xy * _MainTex_TexelSize.zw)/_MainTex_TexelSize.zw + _MainTex_TexelSize.xy/2.0;
    // 2.) Get the uvToWorld transform
    float3x3 uvToWorld = float3x3(i.tangentToWorldAndPackedData[0].xyz,i.tangentToWorldAndPackedData[1].xyz,i.tangentToWorldAndPackedData[2].xyz);
    // 3.) Transform the snapped UV back to world space
    float3 snappedPosWorld = mul(float3(snappedUVs,1),uvToWorld);

    UNITY_APPLY_DITHER_CROSSFADE(i.pos.xy);

    //FRAGMENT_SETUP(s)
    FragmentCommonData s = FragmentSetup(i.tex, i.eyeVec, IN_VIEWDIR4PARALLAX(i), i.tangentToWorldAndPackedData, snappedPosWorld);

    ...
}

The idea is a sound one. The technique of doing lighting per texel is called “texel space shading” or sometimes “object space rendering”. There have been a couple of demos in the past from Nvidia and AMD that do this, and I’ve seen a few games use it for limited special-case situations. The only game I know of to use it extensively for everything is Ashes of the Singularity. It’s pretty close to how Pixar’s RenderMan used to work when it still used REYES, though that’s actually closer to per-vertex lighting; a big part of REYES rendering is that it tessellates everything in view into triangles about the size of one or two pixels.

How this is usually accomplished is a bit more complicated than “just modifying the shader”, and requires completely changing how the lighting system works. Every surface renders itself into a render texture in UV space, the resolution of which is pixel-matched to that surface’s texture’s texels, or some factor of that depending on some kind of LOD. You could implement this in Unity, but it would mean rewriting everything from the ground up.
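To make that concrete, the heart of those demos is a vertex shader that rasterizes each mesh into its own UV layout rather than the camera view, something like this sketch (the names are mine, not from any particular demo):

// Sketch: rasterize a mesh into UV space instead of screen space.
// The mesh's UVs become the clip space position, so each texel of the
// target render texture gets its own interpolated world position to light.
struct v2f
{
    float4 pos : SV_POSITION;
    float3 worldPos : TEXCOORD0;
};

v2f vertUVSpace (float4 vertex : POSITION, float2 uv : TEXCOORD0)
{
    v2f o;
    // Map UVs from [0,1] to clip space [-1,1]; y may need flipping per API.
    o.pos = float4(uv * 2.0 - 1.0, 0.0, 1.0);
    o.worldPos = mul(unity_ObjectToWorld, vertex).xyz;
    return o;
}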

So, you probably don’t want that.

Note that this statement would be true if you went with true object space rendering. But it’s not true with your proposed solution as the lighting is still being calculated for every pixel.

So, how would you do what you want to do above?

The easiest potential solution would be to simply quantize the position in world space to a constant-size grid, but that assumes your game is all axis-aligned surfaces with a constant texel density. It also doesn’t work as well as you might expect, as the actual surfaces of the meshes usually sit at the halfway point between grid cells rather than at their centers. That can be solved by offsetting the position along the surface normal, but it still doesn’t solve the case of non-axis-aligned surfaces. A rough sketch of the idea follows.
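Something like this, with a made-up _TexelsPerUnit material property standing in for your constant texel density:

// Minimal world space grid snap; assumes axis aligned surfaces and constant
// texel density. _TexelsPerUnit is a hypothetical material property.
float _TexelsPerUnit;

float3 SnapWorldPosToGrid(float3 worldPos, float3 worldNormal)
{
    // Push off the surface by half a texel so grid cell centers land on the
    // surface instead of straddling it.
    float3 pushed = worldPos + worldNormal * (0.5 / _TexelsPerUnit);
    return (floor(pushed * _TexelsPerUnit) + 0.5) / _TexelsPerUnit;
}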

Really you need to figure out how much UV space is changing, and in which direction it is aligned, in world space. The vertex tangent and bitangent (aka binormal) are the world space directions aligned to the texture UVs, but those values are normalized, so they can’t tell you how much the UVs are changing along those directions. Those two, along with the normal, form the tangent space transform matrix. You can get something close by taking the tangent and bitangent and scaling them by a value you set on the material, assuming your texel density is constant in world space. But there’s another way.

Screen space derivatives.

You can find out how much the UVs and the position are changing in screen space, or more specifically between two on-screen pixels, and from that derive both how much and in which direction the UVs are changing in world space. Then you just need to know how far from the snapped UV you are in UV space, and you have the texel’s center in world space!

So how do you do this?
https://discussions.unity.com/t/594458/6

In that post I present two functions for creating a tangent-to-world matrix. The second one, which does not take a normal, is the one you want, and the T and B on lines 34 and 35 should be the world space U and V directions and distances. I say should be because I might be wrong.

However, armed with those two vectors and the UV space distance from the texel center, you can find the world space texel center.

float2 offsetUV = snappedUV - originalUV;
float3 snappedWorldPos = originalWorldPos + offsetUV.x * T + offsetUV.y * B;

One thing to note: derivatives aren’t perfect, and they’re only valid within a single triangle. The texel center will be calculated as if it’s on the same plane as the tri currently being rendered. So if you have texels that straddle a polygon edge, the two sides will not compute the same center position. In fact, some may calculate their position as being someplace not even on the polygon surface.

The only truly accurate way to do what you want is to bake out the world position of each model into a texture. Alternatively, and more realistically, you could bake the local model space position and transform it into world space just like you would a vertex position. You’d have to forgo batching entirely to do that, though, as you’d need to retain the original transform for each mesh, and you can’t use meshes with tiled or non-unique UVs. Basically you’d need lightmap-style UVs for your meshes at the exact same texel density as your albedo.
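The sampling side of that bake would be simple, something like this (where _PositionMap and the unique UV channel are assumptions about how you’d set up the bake):

// Sketch: read the texel center's local position straight from a baked
// texture, then transform it like a vertex position. No derivative tricks
// needed, and it stays correct across polygon edges.
sampler2D _PositionMap;

float3 SampleSnappedWorldPos(float2 uniqueUV)
{
    float3 localPos = tex2D(_PositionMap, uniqueUV).xyz;
    return mul(unity_ObjectToWorld, float4(localPos, 1.0)).xyz;
}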

Hey @bgolus , thanks so much for the hint to use ddx and ddy. I have an OK grasp of transforms and 3D concepts, but I’m new to shader programming and even newer to Unity, so this is exactly the sort of info I need.

In my game most objects are quads or cubes, and nearly every vertex has UV coordinates of (0,0), (0,1), (1,0), or (1,1), so accuracy issues where a texel center might land off the model are not a huge concern. If the lighting is a bit off in some corner cases, that’s OK with me; I’ll work around it anywhere it causes a problem. If I can get acceptable performance by snapping the world position to the nearest texel in the fragment shader, that’s exactly what I’ll do. It’s just so much easier than any of the alternatives.

I’m having some trouble adapting the code you linked for my purposes, though. I’m not super clear on what space all the different variables should be in. Here’s what I’ve got right now:

half4 fragForwardBaseTexel (VertexOutputForwardBase i) : SV_Target
{
    // 1.) Snap fragment UVs to the center of the nearest texel
    float2 originalUV = i.tex.xy;
    float2 snapUV = floor(originalUV * _MainTex_TexelSize.zw)/_MainTex_TexelSize.zw + (_MainTex_TexelSize.xy/2.0);
    float2 shiftUV = (snapUV - originalUV);

    // 2.) Using screenspace derivatives, calculate a dUV to dWorld transform
    float3 originalWorldPos = IN_WORLDPOS(i);

    float3 dp1 = ddx( originalWorldPos );
    float3 dp2 = ddy( originalWorldPos ) * _ProjectionParams.x;
    float3 normal = normalize(cross(dp1, dp2));
    float2 duv1 = ddx( originalUV );
    float2 duv2 = ddy( originalUV ) * _ProjectionParams.x;
    // solve the linear system
    float3 dp2perp = cross( dp2, normal );
    float3 dp1perp = cross( normal, dp1 );
    float3 T = dp2perp * duv1.x + dp1perp * duv2.x;
    float3 B = dp2perp * duv1.y + dp1perp * duv2.y;

    // 3.) Transform the snapped UV back to world space
    float3 snappedWorldPos = originalWorldPos + shiftUV.x * T + shiftUV.y * B;

    UNITY_APPLY_DITHER_CROSSFADE(i.pos.xy);

    //FRAGMENT_SETUP(s)
    FragmentCommonData s = FragmentSetup(i.tex, i.eyeVec, IN_VIEWDIR4PARALLAX(i), i.tangentToWorldAndPackedData, snappedWorldPos);
    ...
}

This produces results virtually indistinguishable from the stock Standard shader. I narrowed that down to T and B essentially being zero vectors: no matter what shiftUV is, there’s no change in the position, and therefore no change in the lighting. I’ll keep at it and try to dissect your linked code to understand it better, but another nudge in the right direction would be really appreciated.

To be a little more clear, my game is lit mostly by point and spot lights, and around these lights I can see clear shading variation within texels. Here I’ve turned off my post-process effect for debugging and taken a screenshot of a point light near a wall. Opening the image in Photoshop and boosting its levels, I can confirm that the lighting code is still varying shading across texels.

I continue to dig through the unity standard shader lighting code line by line, looking for where the original fragment position, or any other non-snapped position, might be sneaking in and affecting calculations.

Hmm. Yeah. My expectations for those values were wrong. The T and B, once normalized, absolutely give you the correct direction, but not the correct magnitude. I seem to remember trying to tackle something like this in the past and failing at this same spot.

This is still the way you’d need to do what you’re trying to do for it to work on more complex geometry, but you may be better off with one of the other methods.

I think I’ve gotten it now, by sitting down and writing out all the various coordinate transforms. Here’s my base pass fragment shader. The additive pass is near identical. Step 2c melted my brain for about 20 minutes but I’m ok now.

WARNING THIS CODE HAS BUGS IN IT. I LEAVE IT HERE ONLY FOR REFERENCE. SEE MY LATER POSTS IN THE THREAD FOR FIXED CODE.

half4 fragForwardBaseTexel (VertexOutputForwardBase i) : SV_Target
{
    // 1.) Calculate how much the texture UV coords need to
    //     shift to be at the center of the nearest texel.
    float2 originalUV = i.tex.xy;
    float2 centerUV = floor(originalUV * _MainTex_TexelSize.zw)/_MainTex_TexelSize.zw + (_MainTex_TexelSize.xy/2.0);
    float2 dUV = (centerUV - originalUV);

    // 2a.) Get this fragment's world position
    float3 originalWorldPos = IN_WORLDPOS(i);

    // 2b.) Calculate how much the texture coords vary over fragment space.
    //      This essentially defines a 2x2 matrix that gets
    //      texture space (UV) deltas from fragment space (ST) deltas
    // Note: I call fragment space "ST" to disambiguate from world space "XY".
    float2 dUVdS = ddx( originalUV );
    float2 dUVdT = ddy( originalUV );

    // 2c.) Invert the texture delta from fragment delta matrix
    float2x2 dSTdUV = float2x2(dUVdT[1], -dUVdS[1], -dUVdT[0], dUVdS[0])*(1/(dUVdS[0]*dUVdT[1]-dUVdS[1]*dUVdT[0]));

    // 2d.) Convert the texture delta to fragment delta
    float2 dST = mul(dSTdUV , dUV);

    // 2e.) Calculate how much the world coords vary over fragment space.
    float3 dXYZdS = ddx(originalWorldPos);
    float3 dXYZdT = ddy(originalWorldPos);

    // 2f.) Finally, convert our fragment space delta to a world space delta
    float3 dXYZ = dXYZdS * dST[0] + dXYZdT * dST[1];

    // 3.) Transform the snapped UV back to world space
    float3 snappedWorldPos = originalWorldPos + dXYZ;

    // 4.) Eye vec need to be changed?
    half3 eyeVec = i.eyeVec;

    // 5.) What about view dir for paralax?
    half3 viewDir = IN_VIEWDIR4PARALLAX(i);

    // 6.) Is tangentToWorldAndLightDir ok unchanged?
    half4 tangentToWorld[3] = i.tangentToWorldAndPackedData;

    UNITY_APPLY_DITHER_CROSSFADE(i.pos.xy);

    //FRAGMENT_SETUP(s)
    FragmentCommonData s = FragmentSetup(i.tex, eyeVec, viewDir, tangentToWorld, snappedWorldPos);

    UNITY_SETUP_INSTANCE_ID(i);
    UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i);

    UnityLight mainLight = MainLight();

    UNITY_LIGHT_ATTENUATION(atten, i, snappedWorldPos);

    half occlusion = Occlusion(i.tex.xy);
    UnityGI gi = FragmentGI (s, occlusion, i.ambientOrLightmapUV, atten, mainLight);

    half4 c = UNITY_BRDF_PBS (s.diffColor, s.specColor, s.oneMinusReflectivity, s.smoothness, s.normalWorld, -s.eyeVec, gi.light, gi.indirect);
    c.rgb += Emission(i.tex.xy);

    //UNITY_APPLY_FOG(i.fogCoord, c.rgb);
    return OutputForward (c, s.alpha);
}

There are still some more things to be done before this will ACTUALLY GENUINELY do texel-based shading. I might even need to ditch the standard shader entirely and roll my own, much simpler shader.

But now at least I have an accurate texel center world position in my fragment shader to feed to the lighting calculations. I’ll keep this thread up to date on my progress. Thanks again @bgolus for your help.

Success!

(Two GIF captures showing the per-texel lighting effect in motion.)

I had a little trouble capturing pixel perfect video but it is pixel perfect in-game. Will post a followup with code.

Here’s the ForwardAdd pass of my altered Standard shader. The base pass is near identical. Note: this does NOT do any posterization; it simply snaps all lighting calculations to texels, allowing later posterization effects to work their magic, as seen above.

WARNING THIS CODE HAS BUGS IN IT. I LEAVE IT HERE ONLY FOR REFERENCE. SEE MY NEXT POST IN THE THREAD FOR FIXED CODE.

#pragma vertex vertAdd
#pragma fragment fragForwardAddTexel
#include "UnityStandardCoreForward.cginc"

uniform float4 _MainTex_TexelSize;

half4 fragForwardAddTexel (VertexOutputForwardAdd i) : SV_Target
{
    // 1.) Calculate how much the texture UV coords need to
    //     shift to be at the center of the nearest texel.
    float2 originalUV = i.tex.xy;
    float2 centerUV = floor(originalUV * _MainTex_TexelSize.zw)/_MainTex_TexelSize.zw + (_MainTex_TexelSize.xy/2.0);
    float2 dUV = (centerUV - originalUV);

    // 2a.) Get this fragment's world position
    float3 originalWorldPos = IN_WORLDPOS_FWDADD(i);

    // 2b.) Calculate how much the texture coords vary over fragment space.
    //      This essentially defines a 2x2 matrix that gets
    //      texture space (UV) deltas from fragment space (ST) deltas
    // Note: I call fragment space (S,T) to disambiguate.
    float2 dUVdS = ddx( originalUV );
    float2 dUVdT = ddy( originalUV );

    // 2c.) Invert the fragment from texture matrix
    float2x2 dSTdUV = float2x2(dUVdT[1], -dUVdS[1], -dUVdT[0], dUVdS[0])*(1/(dUVdS[0]*dUVdT[1]-dUVdS[1]*dUVdT[0]));

    // 2d.) Convert the UV delta to a fragment space delta
    float2 dST = mul(dSTdUV , dUV);

    // 2e.) Calculate how much the world coords vary over fragment space.
    float3 dXYZdS = ddx(originalWorldPos);
    float3 dXYZdT = ddy(originalWorldPos);

    // 2f.) Finally, convert our fragment space delta to a world space delta
    // And be sure to clamp it to SOMETHING in case the derivative calc went insane
    // Here I clamp it to -1 to 1 unit in unity, which should be orders of magnitude greater
    // than the size of any texel.
    float3 dXYZ = dXYZdS * dST[0] + dXYZdT * dST[1];
    dXYZ = clamp (dXYZ, -1, 1);

    // 3.) Transform the snapped UV back to world space
    float3 snappedWorldPos = originalWorldPos + dXYZ;

    // 4.) Altering the eyeVec seems not necessary but it is broken out here for debug
    half3 eyeVec = i.eyeVec;

    // 5.) Altering the viewDir seems not necessary but it is broken out here for debug
    half3 viewDir = IN_VIEWDIR4PARALLAX_FWDADD(i);

    // 6.) Altering the tangentToWorld seems not necessary but it is broken out here for debug
    half4 tangentToWorld[3] = i.tangentToWorldAndLightDir;

    // 7.) Altering the lightDir (global space) seems not necessary but it is broken out here for debug
    //     I tried correcting it and found no difference in render.
    //half3 lightDir = normalize(_WorldSpaceLightPos0.xyz - snappedWorldPos); // This seems to not be necessary?
    half3 lightDir = IN_LIGHTDIR_FWDADD(i);

    UNITY_APPLY_DITHER_CROSSFADE(i.pos.xy);

    //FRAGMENT_SETUP_FWDADD(s)
    FragmentCommonData s = FragmentSetup(i.tex, eyeVec, viewDir, tangentToWorld, snappedWorldPos);

    UNITY_LIGHT_ATTENUATION(atten, i, snappedWorldPos);

    UnityLight light = AdditiveLight (lightDir, atten);
    UnityIndirect noIndirect = ZeroIndirect ();

    // 8.) Throw all Unity's beautiful lights into the trash and treat everything
    //     basically like a point light with diffuse only.
    //     TBD: Claw back functionality as-needed.
    //half4 c = UNITY_BRDF_PBS(s.diffColor, s.specColor, s.oneMinusReflectivity, s.smoothness, s.normalWorld, -s.eyeVec, light, noIndirect);
    half4 c = half4(s.diffColor * light.color, 1);

    UNITY_APPLY_FOG_COLOR(i.fogCoord, c.rgb, half4(0,0,0,0)); // fog towards black in additive pass
    return OutputForward (c, s.alpha);
}

And there you have it: moderately efficient texel-space shading with Unity’s Standard shader! It should support shadows, dynamic lights, everything. You’ll need to do some work to get back functionality like material properties, light types other than point, light cookies, etc., but I leave that as an exercise for the reader. The fundamentals are here, and it’s a big relief to me. Now to get back to my game!

I have continued developing this project on and off and found some issues with the shader posted above. It was not calculating an accurate eye angle, and it was incorrectly inverting the matrix in step 2c.
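For anyone following along, step 2c just needs the standard closed-form 2x2 inverse; here it is as a reference helper (not part of the shader itself):

// For M = | a b |, the inverse is  1/(a*d - b*c) * |  d -b |
//         | c d |                                  | -c  a |
// My earlier code built the transpose of this, which is why it was wrong.
float2x2 Inverse2x2(float2x2 m)
{
    float det = m[0][0] * m[1][1] - m[0][1] * m[1][0];
    return float2x2(m[1][1], -m[0][1], -m[1][0], m[0][0]) / det;
}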

The effect works MUCH MUCH better with this fix to the shader and it’s no longer necessary to throw out unity’s stock lights. They work fine now, cookies and all! Shadows should even work but I have not checked.

#pragma vertex vertAdd
#pragma fragment fragAddTexel
#include "UnityStandardCoreForward.cginc"

uniform float4 _MainTex_TexelSize;

half4 fragAddTexel (VertexOutputForwardAdd i) : SV_Target
{
    // 1.) Calculate how much the texture UV coords need to
    //     shift to be at the center of the nearest texel.
    float2 originalUV = i.tex.xy;
    float2 centerUV = floor(originalUV * (_MainTex_TexelSize.zw))/_MainTex_TexelSize.zw + (_MainTex_TexelSize.xy/2.0);
    float2 dUV = (centerUV - originalUV);

    // 2a.) Get this fragment's world position
    float3 originalWorldPos = IN_WORLDPOS_FWDADD(i);

    // 2b.) Calculate how much the texture coords vary over fragment space.
    //      This essentially defines a 2x2 matrix that gets
    //      texture space (UV) deltas from fragment space (ST) deltas
    // Note: I call fragment space (S,T) to disambiguate.
    float2 dUVdS = ddx( originalUV );
    float2 dUVdT = ddy( originalUV );

    // 2c.) Invert the fragment from texture matrix
    float2x2 dSTdUV = float2x2(dUVdT[1], -dUVdT[0], -dUVdS[1], dUVdS[0])*(1.0f/(dUVdS[0]*dUVdT[1]-dUVdT[0]*dUVdS[1]));


    // 2d.) Convert the UV delta to a fragment space delta
    float2 dST = mul(dSTdUV , dUV);

    // 2e.) Calculate how much the world coords vary over fragment space.
    float3 dXYZdS = ddx(originalWorldPos);
    float3 dXYZdT = ddy(originalWorldPos);

    // 2f.) Finally, convert our fragment space delta to a world space delta
    // And be sure to clamp it to SOMETHING in case the derivative calc went insane
    // Here I clamp it to -1 to 1 unit in unity, which should be orders of magnitude greater
    // than the size of any texel.
    float3 dXYZ = dXYZdS * dST[0] + dXYZdT * dST[1];

    dXYZ = clamp (dXYZ, -1, 1);

    // 3.) Transform the snapped UV back to world space
    float3 snappedWorldPos = originalWorldPos + dXYZ;

    UNITY_APPLY_DITHER_CROSSFADE(i.pos.xy);

    // 4.) Insert the snapped position and corrected eye vec into the input structure
    i.posWorld = snappedWorldPos;
    i.eyeVec = NormalizePerVertexNormal(snappedWorldPos.xyz - _WorldSpaceCameraPos);

    // Calculate lightDir using the snapped position at texel center
    float3 lightDir = _WorldSpaceLightPos0.xyz - snappedWorldPos.xyz * _WorldSpaceLightPos0.w;
    #ifndef USING_DIRECTIONAL_LIGHT
        lightDir = NormalizePerVertexNormal(lightDir);
    #endif
    i.tangentToWorldAndLightDir[0].w = lightDir.x;
    i.tangentToWorldAndLightDir[1].w = lightDir.y;
    i.tangentToWorldAndLightDir[2].w = lightDir.z;

    //FRAGMENT_SETUP_FWDADD(s)
    FragmentCommonData s = FragmentSetup(i.tex, i.eyeVec, IN_VIEWDIR4PARALLAX_FWDADD(i), i.tangentToWorldAndLightDir, snappedWorldPos);

    UNITY_LIGHT_ATTENUATION(atten, i, s.posWorld)
    UnityLight light = AdditiveLight (IN_LIGHTDIR_FWDADD(i), atten);
    UnityIndirect noIndirect = ZeroIndirect ();

    // 4.) Call Unity's standard light calculation!
    half4 c = UNITY_BRDF_PBS (s.diffColor, s.specColor, s.oneMinusReflectivity, s.smoothness, s.normalWorld, -s.eyeVec, light, noIndirect);

    UNITY_APPLY_FOG_COLOR(i.fogCoord, c.rgb, half4(0,0,0,0)); // fog towards black in additive pass
    return OutputForward (c, s.alpha);
}

Wow, this looks great! Thank you for sharing the thought process and solution! It’s probably something I’ll never need, given how niche it is, but that’s how all the most interesting things work out! :stuck_out_tongue:

I need this so badly.
How do I actually get it to work?

Can I use it as a subgraph for the Unity Shader Graph editor somehow?

Hello again Unity Community! Development on my little project has continued over the past year, but I’ve run into another snag. Hopefully somebody can get me unstuck!

In a post above, I wrote “shadows should work but I have not checked” regarding my texel lighting shader. As you can see, shadows very much DO work, and they fall along the pixel grid of the textures. I am very proud of this charming effect. It’s fantastic and crunchy in motion.

However, in my project I’ve started to use Unity’s “terrain” module to create natural environments. It’s a big timesaver, but its multi-texture shader will need to be modified for per-texel lighting. See here, where the shadow cast across the wood is pixelated (using my shader) but the shadow across the grass, which is a terrain object, is not (using Unity’s standard terrain shader).

I tried modifying the Unity terrain shader, but found it pretty difficult to understand. In particular, the “Standard-FirstPass” terrain shader seems to be a surface shader rather than a vertex / fragment shader.

It seems that this type of shader doesn’t expose the lighting and shadow calculations I would need to modify to get texel-based shading working. The “surf” step seems designed for texture blending and other simple tasks, while the lighting, shadow, and other passes are automatically generated and therefore not editable.
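For anyone unfamiliar, a surface shader boils down to something like this generic skeleton (my own minimal example, not Unity’s actual terrain code), which shows why there’s no lighting code to hook into:

Shader "Example/SurfaceShaderSkeleton"
{
    Properties { _MainTex ("Albedo", 2D) = "white" {} }
    SubShader
    {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        // Unity generates the forward, deferred, and shadow passes from this;
        // surf only fills in surface properties, so the lighting math that
        // per-texel shading needs to modify never appears in the source.
        #pragma surface surf Standard fullforwardshadows
        sampler2D _MainTex;
        struct Input { float2 uv_MainTex; };
        void surf (Input IN, inout SurfaceOutputStandard o)
        {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
}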

Does someone more knowledgeable than I am have any suggestions for how I might create a terrain shader with the same lighting model as the Standard shader, but with the texel-shading modifications I created above? How should I go about this? Would I be better off extending the Standard shader with the properties needed to render terrain, or would it be smarter to convert the standard terrain shader from a surface shader to vert/frag and then modify it?

Are there any good resources for either of those options?

For those interested in this kind of lighting but not willing to use an old Unity version, I’ve made a URP-based 2019.4 version of this: GitHub - keeborgue/UnityTexelShaders: URP Texel Lighting

Kudos to GreatestBear

This is an interpretation of the URP Lit shader; specular is also texelated.
Next on track: extending TerrainLit shader.
Not tested on translucent materials.

Hi GreatestBear!

Thank you so much for providing this resource; you seem to be the only one publicly releasing their shader code for Unity. I’ve attempted to insert your fragment shader into a Standard shader copied from the built-in shader repository, but unfortunately all I get is vertex lighting and no shadows. Would you mind posting your full shader file, or pointing me in the right direction as to what I’m doing wrong?

Shader "Celestial Static/Texel space lighting"
{
    Properties
    {
        _Color("Color", Color) = (1,1,1,1)
        _MainTex("Albedo", 2D) = "white" {}

        _Cutoff("Alpha Cutoff", Range(0.0, 1.0)) = 0.5

        _Glossiness("Smoothness", Range(0.0, 1.0)) = 0.5
        _GlossMapScale("Smoothness Scale", Range(0.0, 1.0)) = 1.0
        [Enum(Metallic Alpha,0,Albedo Alpha,1)] _SmoothnessTextureChannel ("Smoothness texture channel", Float) = 0

        [Gamma] _Metallic("Metallic", Range(0.0, 1.0)) = 0.0
        _MetallicGlossMap("Metallic", 2D) = "white" {}

        [ToggleOff] _SpecularHighlights("Specular Highlights", Float) = 1.0
        [ToggleOff] _GlossyReflections("Glossy Reflections", Float) = 1.0

        _BumpScale("Scale", Float) = 1.0
        [Normal] _BumpMap("Normal Map", 2D) = "bump" {}

        _Parallax ("Height Scale", Range (0.005, 0.08)) = 0.02
        _ParallaxMap ("Height Map", 2D) = "black" {}

        _OcclusionStrength("Strength", Range(0.0, 1.0)) = 1.0
        _OcclusionMap("Occlusion", 2D) = "white" {}

        _EmissionColor("Color", Color) = (0,0,0)
        _EmissionMap("Emission", 2D) = "white" {}

        _DetailMask("Detail Mask", 2D) = "white" {}

        _DetailAlbedoMap("Detail Albedo x2", 2D) = "grey" {}
        _DetailNormalMapScale("Scale", Float) = 1.0
        [Normal] _DetailNormalMap("Normal Map", 2D) = "bump" {}

        [Enum(UV0,0,UV1,1)] _UVSec ("UV Set for secondary textures", Float) = 0


        // Blending state
        [HideInInspector] _Mode ("__mode", Float) = 0.0
        [HideInInspector] _SrcBlend ("__src", Float) = 1.0
        [HideInInspector] _DstBlend ("__dst", Float) = 0.0
        [HideInInspector] _ZWrite ("__zw", Float) = 1.0
    }

    CGINCLUDE
        #define UNITY_SETUP_BRDF_INPUT MetallicSetup
    ENDCG

    SubShader
    {
        Tags { "RenderType"="Opaque" "PerformanceChecks"="False" }

        // ------------------------------------------------------------------
        //  Base forward pass (directional light, emission, lightmaps, ...)
        Pass
        {
            Name "FORWARD"
            Tags { "LightMode" = "ForwardBase" }

            Blend [_SrcBlend] [_DstBlend]
            ZWrite [_ZWrite]

            CGPROGRAM
            #pragma target 3.0

            // -------------------------------------

            #pragma shader_feature_local _NORMALMAP
            #pragma shader_feature_local _ _ALPHATEST_ON _ALPHABLEND_ON _ALPHAPREMULTIPLY_ON
            #pragma shader_feature _EMISSION
            #pragma shader_feature_local _METALLICGLOSSMAP
            #pragma shader_feature_local _DETAIL_MULX2
            #pragma shader_feature_local _SMOOTHNESS_TEXTURE_ALBEDO_CHANNEL_A
            #pragma shader_feature_local _SPECULARHIGHLIGHTS_OFF
            #pragma shader_feature_local _GLOSSYREFLECTIONS_OFF
            #pragma shader_feature_local _PARALLAXMAP

            #pragma multi_compile_fwdbase
            #pragma multi_compile_fog
            #pragma multi_compile_instancing
            // Uncomment the following line to enable dithering LOD crossfade. Note: there are more in the file to uncomment for other passes.
            //#pragma multi_compile _ LOD_FADE_CROSSFADE

            #pragma vertex vertAdd
            #pragma fragment fragAddTexel
            #include "UnityStandardCoreForward.cginc"
           
            uniform float4 _MainTex_TexelSize;
           
            half4 fragAddTexel (VertexOutputForwardAdd i) : SV_Target
            {
                // 1.) Calculate how much the texture UV coords need to
                //     shift to be at the center of the nearest texel.
                float2 originalUV = i.tex.xy;
                float2 centerUV = floor(originalUV * (_MainTex_TexelSize.zw))/_MainTex_TexelSize.zw + (_MainTex_TexelSize.xy/2.0);
                float2 dUV = (centerUV - originalUV);
            
                // 2a.) Get this fragment's world position
                float3 originalWorldPos = IN_WORLDPOS_FWDADD(i);
            
                // 2b.) Calculate how much the texture coords vary over fragment space.
                //      This essentially defines a 2x2 matrix that gets
                //      texture space (UV) deltas from fragment space (ST) deltas
                // Note: I call fragment space (S,T) to disambiguate.
                float2 dUVdS = ddx(originalUV);
                float2 dUVdT = ddy(originalUV);
            
                // 2c.) Invert the fragment from texture matrix
                float2x2 dSTdUV = float2x2(dUVdT[1], -dUVdT[0], -dUVdS[1], dUVdS[0])*(1.0f/(dUVdS[0]*dUVdT[1]-dUVdT[0]*dUVdS[1]));
            
                // 2d.) Convert the UV delta to a fragment space delta
                float2 dST = mul(dSTdUV , dUV);
            
                // 2e.) Calculate how much the world coords vary over fragment space.
                float3 dXYZdS = ddx(originalWorldPos);
                float3 dXYZdT = ddy(originalWorldPos);
            
                // 2f.) Finally, convert our fragment space delta to a world space delta
                // And be sure to clamp it to SOMETHING in case the derivative calc went insane
                // Here I clamp it to -1 to 1 unit in unity, which should be orders of magnitude greater
                // than the size of any texel.
                float3 dXYZ = dXYZdS * dST[0] + dXYZdT * dST[1];
            
                dXYZ = clamp (dXYZ, -1, 1);
            
                // 3.) Transform the snapped UV back to world space
                float3 snappedWorldPos = originalWorldPos + dXYZ;
            
                UNITY_APPLY_DITHER_CROSSFADE(i.pos.xy);
            
                // 4.) Insert the snapped position and corrected eye vec into the input structure
                i.posWorld = snappedWorldPos;
                half3 eyeVec = NormalizePerVertexNormal(snappedWorldPos.xyz - _WorldSpaceCameraPos);
            
                // Calculate lightDir using the snapped position at texel center
                float3 lightDir = _WorldSpaceLightPos0.xyz - snappedWorldPos.xyz * _WorldSpaceLightPos0.w;
                #ifndef USING_DIRECTIONAL_LIGHT
                    lightDir = NormalizePerVertexNormal(lightDir);
                #endif
                i.tangentToWorldAndLightDir[0].w = lightDir.x;
                i.tangentToWorldAndLightDir[1].w = lightDir.y;
                i.tangentToWorldAndLightDir[2].w = lightDir.z;
            
                //FRAGMENT_SETUP_FWDADD(s)
                FragmentCommonData s = FragmentSetup(i.tex, eyeVec, IN_VIEWDIR4PARALLAX_FWDADD(i), i.tangentToWorldAndLightDir, snappedWorldPos);
            
                UNITY_LIGHT_ATTENUATION(atten, i, s.posWorld)
                UnityLight light = AdditiveLight (IN_LIGHTDIR_FWDADD(i), atten);
                UnityIndirect noIndirect = ZeroIndirect ();
            
                // 4.) Call Unity's standard light calculation!
                half4 c = UNITY_BRDF_PBS (s.diffColor, s.specColor, s.oneMinusReflectivity, s.smoothness, s.normalWorld, -s.eyeVec, light, noIndirect);
            
                UNITY_APPLY_FOG_COLOR(i.fogCoord, c.rgb, half4(0,0,0,0)); // fog towards black in additive pass
                return OutputForward (c, s.alpha);
            }

            ENDCG
        }
    }

    CustomEditor "StandardShaderGUI"
}

Your shader only has a forward pass. For shadows, a shader needs the ShadowCaster pass, which you’ll see in the Standard shader you copied this pass from. The ShadowCaster pass will have to be modified in the same ways this one was to get the texel-based lighting.

When working with surface shaders, Unity generates all those passes for you automatically.
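For reference, the missing pass in the built-in Standard shader looks roughly like this skeleton (trimmed; the real one carries more feature pragmas):

Pass
{
    Name "ShadowCaster"
    Tags { "LightMode" = "ShadowCaster" }

    ZWrite On ZTest LEqual

    CGPROGRAM
    #pragma target 3.0
    #pragma multi_compile_shadowcaster
    #pragma vertex vertShadowCaster
    #pragma fragment fragShadowCaster
    #include "UnityStandardShadow.cginc"
    ENDCG
}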

I’m running into a weird issue: when I use this fragment code, everything just turns black, even with the code from Philip. Does it require any additional project configuration?
I’m using the built-in rendering pipeline.

Does anyone know how to make it work? I’m a dummy in the shader world…
There’s a URP version available here:
3D Pixel Art Shadows (Texel Space Lighting Shader for URP) | VFX Shaders | Unity Asset Store
But I’m not so ready for URP right now…

Is it possible to recreate this in Shader Graph?

It is possible; I’ve done it with a URP Shader Graph now. Since Shader Graph and URP keep changing dramatically with every release, there’s a good chance any example I post will be useless to anyone but a shader expert (and those guys don’t need an example; the code above is enough), but here goes anyway.

The primary trick is to pick a PBR output node (so lighting inputs are routed to the graph), but provide no albedo. Then follow the code above, lightly adapted to URP. Here’s how I did it.

First I created a subgraph that does per-texel lighting. Given World Position, UVs, Texel Size, Normal, Albedo, and an Emissive map, it can give you a final lit, rendered fragment color. That looks like this:


From left to right the custom functions are…

void TexelSnap_float(float3 WorldPos, float4 UV0, float4 TexelSize, out float3 SnappedWorldPos)
{
    // 1.) Calculate how much the texture UV coords need to
    //     shift to be at the center of the nearest texel.
    float2 originalUV = UV0.xy;
    float2 centerUV = floor(originalUV * (TexelSize.zw))/TexelSize.zw + (TexelSize.xy/2.0);
    float2 dUV = (centerUV - originalUV);

    // 2b.) Calculate how much the texture coords vary over fragment space.
    //      This essentially defines a 2x2 matrix that gets
    //      texture space (UV) deltas from fragment space (ST) deltas
    // Note: I call fragment space "ST" to disambiguate from world space "XY".
    float2 dUVdS = ddx( originalUV );
    float2 dUVdT = ddy( originalUV );

    // 2c.) Invert the texture delta from fragment delta matrix
    float2x2 dSTdUV = float2x2(dUVdT[1], -dUVdT[0], -dUVdS[1], dUVdS[0])*(1.0f/(dUVdS[0]*dUVdT[1]-dUVdT[0]*dUVdS[1]));

    // 2d.) Convert the texture delta to fragment delta
    float2 dST = mul(dSTdUV , dUV);

    // 2e.) Calculate how much the world coords vary over fragment space.
    float3 dXYZdS = ddx(WorldPos);
    float3 dXYZdT = ddy(WorldPos);

    // 2f.) Finally, convert our fragment space delta to a world space delta
    // And be sure to clamp it in case the derivative calc went insane
    float3 dXYZ = dXYZdS * dST[0] + dXYZdT * dST[1];
    dXYZ = clamp (dXYZ, -1, 1);

    // 3a.) Transform the snapped UV back to world space
    SnappedWorldPos = (WorldPos + dXYZ);
}
void GetAmbient_float(out float3 Ambient)
{
   Ambient = half3(unity_SHAr.w, unity_SHAg.w, unity_SHAb.w);
}
void StandardPBR_float(float3 WorldPos, float3 Normal, float3 Ambient, float3 Albedo, out float3 Color)
{
    #if SHADERGRAPH_PREVIEW
       float3 Direction = half3(0.5, 0.5, 0);
       float3 LightColor = 1;
       float DistanceAtten = 1;
       float ShadowAtten = 1;
    #else
    #if SHADOWS_SCREEN
       float4 clipPos = TransformWorldToHClip(WorldPos);
       float4 shadowCoord = ComputeScreenPos(clipPos);
    #else
       float4 shadowCoord = TransformWorldToShadowCoord(WorldPos);
    #endif
       Light light = GetMainLight(shadowCoord);
       float3 Direction = light.direction;
       float3 LightColor = light.color;
       float DistanceAtten = light.distanceAttenuation;
       float ShadowAtten = light.shadowAttenuation;
    #endif

    Color = Albedo * saturate(LightColor * DistanceAtten) * ShadowAtten * saturate(dot(Normal, Direction));

    #ifndef SHADERGRAPH_PREVIEW
       int pixelLightCount = GetAdditionalLightsCount();
       for (int i = 0; i < pixelLightCount; ++i)
       {
           light = GetAdditionalLight(i, WorldPos);
           Direction = light.direction;
           LightColor = light.color;
           DistanceAtten = light.distanceAttenuation;
           ShadowAtten = light.shadowAttenuation;

           Color += Albedo * saturate(LightColor * DistanceAtten) * ShadowAtten * saturate(dot(Normal, Direction));
       }
    #endif
  
    Color += (Albedo * Ambient);
}

This is the most complex part by far. I may have (I legitimately cannot remember) monkeyed with the StandardPBR function to get it to behave more how I wanted compared to stock Unity lighting. I believe I altered it to avoid blowouts from being overlit. This is good for my game but might be bad for yours. Sorry, I did not track what I changed as I wrote it.

Next we create a PBR Graph to host this. Mine looks like this:


I’ve named the Texture input _MainTex for legacy reasons primarily. You may or may not need to do this.

The custom function here just fetches the four-component texel size, since Unity’s built-in Texel Size node didn’t provide it correctly in my version:

void GetTexelSize_float(out float4 TexelSize)
{
    TexelSize = _MainTex_TexelSize;
}

And that’s pretty much it. I did not hook up smoothness or metallic properties because I did not need them. Since I was hijacking the emissive input of the PBR master node for this custom lighting, I did hook up a passthrough to allow an extra emissive texture input on the TexelPBR subgraph, but I am not using it in this example.

I have other copies of this shader that are more featured and support normal maps, sprites with outlines, etc but these variants get quite complex and everything important is featured here. Good luck.
