system
1
Ladies, gentlemen.
My current project’s art style uses the Unmoving Plaid Effect, and I’m currently puzzling over how to achieve it on a 3D model.
The way I have it set up thus far is like so:
A plane with the desired pattern is placed a decent distance away from the Main Camera, and parented to a secondary camera that renders only the layer it’s on (Pattern layer). Said secondary camera is then parented to the Main Camera, which is set to render all BUT the Pattern layer.
What I’m thinking I can do from here is puzzle out some sort of shader, mask, or something that uses a texture’s alpha to cut ‘holes’ into the Main Camera’s render layer, revealing the pattern beneath only in certain areas.
The only issue I can foresee with this method is handling the number of patterns on screen at once. Perhaps each object that has the effect could have a variable or something connecting it with its pattern layer? I’m not quite sure.
Anyway, does this seem like a sound method to you? Or should I try something else? Is there any sort of shader that works with Unity 3.4 that can cut ‘holes’ into a layer?
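(For reference, here is a rough sketch of the hole-cutting idea as a single surface shader rather than a layer mask — this is my own hypothetical example, not working code from my project. The `alphatest:_Cutoff` directive discards pixels where the texture’s alpha falls below the cutoff, so whatever is behind the object — e.g. the pattern plane — shows through the ‘holes’:)

```
// Hypothetical sketch: alpha testing cuts 'holes' where the mask alpha is low.
Shader "Example/AlphaHoles" {
    Properties {
        _MainTex ("Texture (A = mask)", 2D) = "white" {}
        _Cutoff ("Alpha Cutoff", Range(0,1)) = 0.5
    }
    SubShader {
        Tags { "RenderType" = "TransparentCutout" "Queue" = "AlphaTest" }
        CGPROGRAM
        // alphatest:_Cutoff discards fragments with Alpha below _Cutoff
        #pragma surface surf Lambert alphatest:_Cutoff
        struct Input { float2 uv_MainTex; };
        sampler2D _MainTex;
        void surf (Input IN, inout SurfaceOutput o) {
            half4 c = tex2D(_MainTex, IN.uv_MainTex);
            o.Albedo = c.rgb;
            o.Alpha = c.a; // clipped where c.a < _Cutoff
        }
        ENDCG
    }
    Fallback "Transparent/Cutout/Diffuse"
}
```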
Peter_G
2
I’m not sure I fully understand the setup, but I think I get the idea. I would map the texture based on the object’s screen position. This should be much easier than trying to cut holes in a layer. The following shader is taken shamelessly from this docs page.
Shader "Example/ScreenPos" {
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
        _Overlay ("Masked Texture", 2D) = "gray" {}
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        #pragma surface surf Lambert
        struct Input {
            float2 uv_MainTex;
            float4 screenPos;
        };
        sampler2D _MainTex;
        sampler2D _Overlay;
        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
            // Perspective divide gives 0..1 screen-space UVs.
            float2 screenUV = IN.screenPos.xy / IN.screenPos.w;
            // Tile the overlay 8x6 across the screen.
            screenUV *= float2(8, 6);
            o.Albedo *= tex2D(_Overlay, screenUV).rgb * 2;
        }
        ENDCG
    }
    Fallback "Diffuse"
}
A shader like this should do almost exactly what you want. It’s basically a flat projection of the texture onto the image from the camera’s perspective. The texturing is based on where the object appears on the screen; it has nothing to do with the object’s world position or how it’s UV mapped.
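(To tie this back to your alpha-mask idea: you could blend the screen-locked pattern in only where the main texture’s alpha is high, so the pattern appears in specific areas instead of everywhere. This is a hypothetical variant of the docs shader — the `_Pattern` property and the `lerp` blend are my own additions, not from the docs page:)

```
// Hypothetical variant: the main texture's alpha masks where the
// screen-locked pattern replaces the base colour.
Shader "Example/ScreenSpacePattern" {
    Properties {
        _MainTex ("Texture (A = pattern mask)", 2D) = "white" {}
        _Pattern ("Pattern", 2D) = "gray" {}
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        #pragma surface surf Lambert
        struct Input {
            float2 uv_MainTex;
            float4 screenPos;
        };
        sampler2D _MainTex;
        sampler2D _Pattern;
        void surf (Input IN, inout SurfaceOutput o) {
            half4 base = tex2D(_MainTex, IN.uv_MainTex);
            // Screen-space UVs, as in the docs example.
            float2 screenUV = IN.screenPos.xy / IN.screenPos.w;
            half3 pattern = tex2D(_Pattern, screenUV).rgb;
            // Where the mask alpha is 1, show the pattern; where 0, the base.
            o.Albedo = lerp(base.rgb, pattern, base.a);
        }
        ENDCG
    }
    Fallback "Diffuse"
}
```

One shader (and one material per pattern) would then replace the whole secondary-camera setup, which also sidesteps the multiple-patterns-on-screen problem.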