Lighting information for a different UV

I’m trying to get lighting information to line up with pixels (large, chunky pixels).

My initial intuition is to quantize (snap) the UVs so that multiple screen pixels all read lighting information for the same UV.
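
Something like this rough sketch is what I have in mind; _TexSize is just a stand-in name for the texture’s resolution in texels:

// Snap UVs to the texel grid so every screen pixel that lands inside one
// texel reads the same value (the +0.5 samples texel centers)
float _TexSize;

float2 snapUV (float2 uv)
{
	return (floor (uv * _TexSize) + 0.5) / _TexSize;
}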

I know lighting isn’t actually stored in a map, so I’m trying to figure out whether there’s a way I could create a lighting buffer in the vertex shader. Or maybe I could write my own interpolation method for the lighting info that gets passed from vertex to fragment? I’m a bit out of my depth here.

[Example image]

Any help on where to look here would be fantastic. I’m honestly not even sure what words I should be using in my search.

This is definitely an interesting question, since answering it involves breaking a lot of the established conventions of rendering (and the assumptions that come with them). Initially, I took this to be per-vertex, because per-texel is a lot more complex. Nonetheless, my first thought was to create a custom vertex-lit shader, forcing no interpolation on the lighting data.

That was simple enough. The real challenge was getting shadows to work, which involved some rewriting of internal code to allow for shadow sampling in the vertex shader. The main issue with this approach is that Unity uses screen-space directional shadows, which (as much pain as they cause users of this site) do work, so long as you sample at full-resolution pixel scale. We aren’t, so you end up with different objects’ lighting and shadows interfering with each other, because there’s no way to tell which object is being sampled at a given screen position. Here’s the shader if you want to see how that works:

Shader "Custom/PixelatedLighting"
{
	Properties
	{
		_MainTex ("Texture", 2D) = "white" {}
	}

	CGINCLUDE

	#include "UnityCG.cginc"
	#include "Lighting.cginc"

	#include "HLSLSupport.cginc"
	#include "UnityShadowLibrary.cginc"

	// Redefine the screen-space shadow sampling macro (normally defined in
	// HLSLSupport.cginc) to use an explicit LOD, since implicit-LOD sampling
	// (tex2Dproj) is not available in the vertex shader
	#undef UNITY_SAMPLE_SCREEN_SHADOW
	#if defined(UNITY_STEREO_INSTANCING_ENABLED) || defined(UNITY_STEREO_MULTIVIEW_ENABLED)
		#define UNITY_SAMPLE_SCREEN_SHADOW(tex, uv) UNITY_SAMPLE_TEX2DARRAY_LOD( tex, float3((uv).x/(uv).w, (uv).y/(uv).w, (float)unity_StereoEyeIndex), 0 ).r
	#else
		#define UNITY_SAMPLE_SCREEN_SHADOW(tex, uv) tex2Dlod( tex, float4 (uv.xy / uv.w, 0, 0) ).r
	#endif

	#include "AutoLight.cginc"
	
	struct appdata
	{
		float4 vertex : POSITION;
		float2 uv : TEXCOORD0;
		float3 normal : NORMAL;
	};

	struct v2f
	{
		float4 pos : SV_POSITION;
		// nointerpolation keeps the per-vertex lighting flat across each
		// triangle instead of letting the rasterizer blend it
		nointerpolation float3 lighting : TEXCOORD0;
		SHADOW_COORDS(1)
	};

	v2f vert (appdata v)
	{
		v2f o;
		o.pos = UnityObjectToClipPos (v.vertex);

		float3 worldNorm = UnityObjectToWorldNormal (v.normal);

		// Sample the shadow map here, in the vertex shader, via the
		// LOD-based macro defined above
		TRANSFER_SHADOW(o);
		fixed shadow = SHADOW_ATTENUATION (o);

		// Ramped diffuse term from the main directional light, plus the
		// ambient/light-probe contribution
		half diff = smoothstep (0, 0.5, dot (worldNorm, _WorldSpaceLightPos0.xyz));
		o.lighting = _LightColor0.rgb * diff * shadow + ShadeSH9 (half4 (worldNorm, 1));

		return o;
	}

	half4 frag (v2f i) : SV_Target
	{
		// Output the flat lighting on its own; _MainTex is intentionally
		// left unsampled so the lighting is easy to inspect
		return half4 (i.lighting, 1);
	}

	ENDCG

	SubShader
	{
		Tags { "RenderType"="Opaque" }
		LOD 100

		Pass
		{
			Tags { "LightMode"="ForwardBase" }

			CGPROGRAM
			#pragma vertex vert
			#pragma fragment frag
			#pragma multi_compile_fwdbase
			ENDCG
		}

		UsePass "Legacy Shaders/VertexLit/ShadowCaster"
	}
}

As far as lighting per texel goes, the only way I can think to do it with realtime lighting would be to bake the surface data (normals, positions, etc.) into point-sampled, non-interpolated textures and use those to calculate lighting instead of the interpolated vertex data. There are other ways to get pixelated lighting, like quantizing (“voxelizing”) the coordinates in the fragment shader before calculating lighting (see the sketch below), but there isn’t really any correlation between lighting and texture UVs, which makes relating the two incredibly complex. The only real way to do that would be to render the lighting directly into a texture in one pass, then sample that texture in another. Hope that gives you some ideas.
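
For illustration, here’s a rough sketch of that coordinate-quantization idea. The names (v2fVoxel, fragVoxel, _VoxelsPerUnit) are hypothetical, it assumes the same UnityCG.cginc and Lighting.cginc includes as the shader above, and it skips shadows entirely - treat it as a starting point, not a drop-in implementation.

// Hypothetical sketch: snap the shading position to a world-space grid so
// lighting changes in chunky, voxel-sized steps; assumes UnityCG.cginc and
// Lighting.cginc are included, as in the shader above (shadows omitted)
struct v2fVoxel
{
	float4 pos       : SV_POSITION;
	float3 worldPos  : TEXCOORD0;
	float3 worldNorm : TEXCOORD1;
};

float _VoxelsPerUnit; // assumed material property, e.g. 16

half4 fragVoxel (v2fVoxel i) : SV_Target
{
	// Quantize the world position before any lighting math
	float3 snapped = floor (i.worldPos * _VoxelsPerUnit) / _VoxelsPerUnit;

	// _WorldSpaceLightPos0.w is 0 for a directional light (xyz is already
	// the direction) and 1 for a point light (xyz is the position), so
	// this expression handles both
	float3 lightDir = normalize (_WorldSpaceLightPos0.xyz - snapped * _WorldSpaceLightPos0.w);

	half diff = smoothstep (0, 0.5, dot (normalize (i.worldNorm), lightDir));
	return half4 (_LightColor0.rgb * diff + ShadeSH9 (half4 (i.worldNorm, 1)), 1);
}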