Hey, I’ve been experimenting with this type of grass rendering:
Hope the paint sketch gets the idea across. Basically I’m trying to do a fur-style rendering without the stacked alpha planes. The top-down texture stores height information about the grass, and every iteration we march towards the final depth and check if that position is “under” the grass. I’ve been able to get some results with this dirty piece of code:
// Assumed uniforms, declared elsewhere: _GrassMap and _GrassMapMirror are the
// world-pos textures (RGB = world xyz; the mirror's A = grass height),
// _MirrorVP is the top-down camera's view-projection matrix, and far / ry
// are the march distance and a height scale.
float4 g1 = tex2D(_GrassMap, uv);
int steps = 256;
// NOTE: this is the camera forward, shared by every pixel; for a perspective
// camera each pixel needs its own ray direction (see below).
float3 forward = UNITY_MATRIX_V[2].xyz;
float3 fwd = forward * far / steps;
float3 pos = g1.xyz; // start at the world position this pixel sees
float final = 0;
bool hit = false;
for (int j = 0; j < steps; j++)
{
    // Project the sample point into the top-down camera.
    float4 ip = mul(_MirrorVP, float4(pos, 1));
    // ComputeScreenPos returns homogeneous coords: divide by w before
    // sampling (a no-op for an orthographic mirror, but safe either way).
    float4 sp = ComputeScreenPos(ip);
    float4 gs = tex2D(_GrassMapMirror, sp.xy / sp.w);
    // Top of the grass at this point: ground height + scaled blade height.
    float grassTop = gs.y + gs.a * 12.0 * ry;
    if (grassTop <= pos.y && !hit)
    {
        final = j;
        hit = true;
    }
    pos -= fwd; // march back toward the camera
}
return smoothstep(0, steps, final);
Basically I have the “normal” camera, which renders _GrassMap, and then _GrassMapMirror, which is a top-down view; I march through the scene and look up the top-down texture. Both textures are “world pos textures”, i.e. RGB contains the world xyz. The code here marches backwards; I’ve been experimenting with all directions just to get something on screen, but clearly there is something I’m not understanding, because the perspective is warped. I suspect it has something to do with composing the ray in the wrong way.
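For the ray composition, the only per-pixel thing I have is the world position itself, so maybe the step should come from that instead of the shared camera forward. A minimal sketch, assuming Unity’s built-in _WorldSpaceCameraPos and the world-pos sample g1:

// A minimal sketch: the ray for a pixel runs from the camera through the
// world position that pixel sees, so each pixel gets its own direction.
float3 MarchStep(float3 surfaceWorldPos, float farDist, int stepCount)
{
    float3 rayDir = normalize(surfaceWorldPos - _WorldSpaceCameraPos);
    return rayDir * farDist / stepCount;
}

Then float3 fwd = MarchStep(g1.xyz, far, steps); and pos -= fwd as before, so every pixel marches back toward the camera along its own view ray.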
There is probably something I’m missing about the orthographic view, or the rendering idea itself doesn’t provide enough information. Every piece of help is appreciated.
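In case the orthographic part is the problem: an ortho top-down camera maps world XZ linearly onto the texture, so the mirror lookup could skip the matrix entirely. A sketch with hypothetical uniforms (_MirrorOrigin = world XZ of the view volume’s min corner, _MirrorSize = its width/height), assuming the camera looks straight down with no rotation:

// Hypothetical uniforms fed from the mirror camera's transform and
// orthographicSize; not part of the code above.
float2 _MirrorOrigin;
float2 _MirrorSize;

float2 TopDownUV(float3 worldPos)
{
    // Linear world-XZ -> UV mapping of an orthographic top-down camera.
    return (worldPos.xz - _MirrorOrigin) / _MirrorSize;
}

tex2D(_GrassMapMirror, TopDownUV(pos)) should then match the _MirrorVP path; if the two disagree, the matrix (or the ComputeScreenPos step) is the culprit.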
PS. This code runs as a post-processing shader because my old screen-space grass method was done that way. I realize it might be wiser to do this inside a “regular mesh shader”.
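If I do move it into a mesh shader, the ray would come almost for free, since each fragment already knows the world position it shades. A rough sketch of the fragment side, assuming the vertex shader passes worldPos through:

struct v2f
{
    float4 vertex   : SV_POSITION;
    float3 worldPos : TEXCOORD0; // set in the vertex shader via unity_ObjectToWorld
};

fixed4 frag(v2f i) : SV_Target
{
    // Per-fragment ray: from the camera through this surface point.
    float3 rayDir = normalize(i.worldPos - _WorldSpaceCameraPos);
    float3 pos = i.worldPos;
    // ...then the same march as above: pos -= rayDir * far / steps,
    // sampling _GrassMapMirror each iteration.
    return 0;
}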