I’ve got this really simple fragment shader that projects a gradient across the side of my voxel mesh (which is generated at runtime).
The gradient continues all the way from the bottom to the top. How can I get the gradient to reset if the voxel is at a different z position?
e.g. here is what I have:
[screenshot: the gradient runs unbroken from the bottom of the mesh to the top]
but this is what I want:
[screenshot: the gradient resets at each step]
Here is my shader code
Pass {
    ZWrite Off
    Blend SrcAlpha OneMinusSrcAlpha

    CGPROGRAM
    // Define the vertex and fragment shader functions
    #pragma vertex vert
    #pragma fragment frag

    #include "UnityCG.cginc"

    // Access ShaderLab properties
    uniform float4 _Color;

    // Input into the vertex shader
    struct vertexInput {
        float4 vertex : POSITION;
        float3 normal : NORMAL;
    };

    // Output from vertex shader into fragment shader
    struct vertexOutput {
        float4 pos : SV_POSITION;
        float4 worldPos : TEXCOORD0;
        float3 normal : TEXCOORD1; // you don't need these semantics except for XBox360
        float3 viewT : TEXCOORD2;  // you don't need these semantics except for XBox360
    };

    // VERTEX SHADER
    vertexOutput vert(vertexInput input) {
        vertexOutput output;
        output.pos = UnityObjectToClipPos(input.vertex);
        output.worldPos = mul(unity_ObjectToWorld, input.vertex);
        output.normal = normalize(input.normal);
        output.viewT = float3(0, 0, 0); // unused, but initialise it so the output struct is fully written
        return output;
    }

    // FRAGMENT SHADER
    float4 frag(vertexOutput input) : COLOR {
        return _Color * input.worldPos.y * input.normal.x;
    }
    ENDCG
}
}
There’s not enough information available in the shader to know whether it’s a separate block or not. You’ll probably have to generate UV co-ordinates for the voxel mesh faces and use V as the gradient height instead of the worldPos.y co-ordinate in the shader.
The second example here shows visualising the UV co-ordinates from the mesh in a shader. You could then use i.uv.y as the height. I can’t help with generating suitable UVs though, that’ll be very specific to your voxel mesh generator. It’ll need to scale the UVs of each face (of continuous voxel faces) to the correct range for the gradient texture.
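For example, once the generator writes a per-face V co-ordinate that runs from 0 at the bottom of each continuous face to 1 at its top, the shader side becomes trivial. A minimal sketch, assuming those UVs are stored in the mesh's first UV channel (the rest mirrors the structure of the shader above):

```
// Sketch only: assumes the voxel mesh generator writes a per-face UV
// where uv.y goes 0 -> 1 from the bottom to the top of each continuous face.
struct vertexInput {
    float4 vertex : POSITION;
    float2 uv : TEXCOORD0;
};

struct vertexOutput {
    float4 pos : SV_POSITION;
    float2 uv : TEXCOORD0;
};

vertexOutput vert(vertexInput input) {
    vertexOutput output;
    output.pos = UnityObjectToClipPos(input.vertex);
    output.uv = input.uv; // pass the generated UVs straight through
    return output;
}

float4 frag(vertexOutput input) : COLOR {
    // uv.y resets per face, so the gradient restarts wherever the mesh says a new face begins
    return _Color * input.uv.y;
}
```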
No, because at no point does the shader have knowledge of that other vertex.
A vertex shader only knows about itself and nothing else.
A fragment shader only knows about the interpolated values it receives and nothing about the individual vertices that produced it.*
Going one step further, a geometry shader only knows of the 3 vertices that make up a single triangle.*
The only way to do this is to encode the necessary information into the mesh when it's generated via C#. In other words, have the gradient be based on manually calculated UVs and not purely on world position (see the sketch after this post).
Technically on AMD GPUs the fragment shader does the interpolation and has knowledge of all 3 vertices for each triangle, but this isn’t readily exposed to Unity’s shaders.
Adjacency data could give more information beyond that, but it wouldn’t be enough to act on for this situation, and Unity doesn’t support adjacency data in geometry shaders.
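A minimal sketch of what the shader side could look like, assuming the C# generator bakes the world-space Y of each face's bottom edge and the face's total height into a spare UV channel (the channel and names here are purely illustrative):

```
// Sketch only: faceData is assumed to be baked by the mesh generator.
// faceData.x = world-space Y of the bottom of the continuous face,
// faceData.y = total height of that face.
struct vertexInput {
    float4 vertex : POSITION;
    float2 faceData : TEXCOORD1;
};

struct vertexOutput {
    float4 pos : SV_POSITION;
    float4 worldPos : TEXCOORD0;
    float2 faceData : TEXCOORD1;
};

vertexOutput vert(vertexInput input) {
    vertexOutput output;
    output.pos = UnityObjectToClipPos(input.vertex);
    output.worldPos = mul(unity_ObjectToWorld, input.vertex);
    output.faceData = input.faceData;
    return output;
}

float4 frag(vertexOutput input) : COLOR {
    // Normalise the fragment's height against the face the generator described,
    // so the gradient restarts at the bottom of every separate face.
    float gradient = saturate((input.worldPos.y - input.faceData.x) / max(input.faceData.y, 0.0001));
    return _Color * gradient;
}
```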
As @bgolus said, no. There's another problem with that approach even if it were possible.
In your example if you flipped the middle top section over so the small step was resting on the bottom block and the big one next to it, then that approach would shade the smaller step wrong. The small step wouldn’t continue the face below, but would be part of the face one block right and one down.
The UV generation would need to allow for all three quads that make up that face so the narrow bottom bit would be red and the wider top bit white. Otherwise you end up with two different gradients on what should be one continuous face.
With faces that could span many blocks to the sides, and any of those could continue up or down, there are a lot of variations to allow for in what counts as a face for the gradient.
If you look at this shader (Shader - Shadertoy BETA), how is it able to calculate the gradients like that? Is the data baked into the mesh? It seems to all be calculated in the shader.
The mesh is generated in the shader as well, so it can do the math for the extra faces to see whether AO is needed or not. This is why there are 8 getVoxel calls in voxelAO - this only works because there isn't a mesh, just some math to make one built into the shader.
Following the origins of the shadertoy, there is an explanation of the fake occlusion used by that shader. It still needs UV maps (per voxel face) and needs to know whether neighbouring voxels are occupied or not, so you'd still need to modify the mesh generation to provide that information to a shader based on this. It avoids the multiple-voxel-face issues by having each gradient only cover one voxel block.
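As a rough illustration of that per-block behaviour (assuming the voxels are 1 unit tall and aligned to the world grid, which may not match your generator), the gradient can be restarted every unit of height with no extra mesh data at all; it just won't merge stacked blocks into one continuous face:

```
float4 frag(vertexOutput input) : COLOR {
    // frac() restarts the 0-1 ramp at every integer world-space Y,
    // i.e. once per voxel block rather than once per continuous face.
    return _Color * frac(input.worldPos.y);
}
```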
To build on @DominoM's response, it really is important to understand there is no mesh being rendered on that page apart from a single quad used to render the fragment shader to the screen. That shader, and all Shadertoys that appear to be using meshes, are raytracing volumetric data stored and calculated within the shader. Because of this they effectively have access to the entire scene's "geometry" at every pixel and really can do things like test against the face below it.
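To make that concrete, here is the rough shape of such a shader. This is a heavily simplified sketch, not the actual Shadertoy code; getVoxel, the made-up heightfield and the fixed step size are all stand-ins:

```
// Hypothetical occupancy function standing in for the Shadertoy's voxel data:
// returns true if the grid cell at integer coordinates p contains a solid voxel.
bool getVoxel(int3 p)
{
    float h = 3.0 + 2.0 * sin(p.x * 0.7) * cos(p.z * 0.7); // made-up heightfield
    return p.y < (int)floor(h);
}

// Fixed-step march along a ray until a solid voxel is hit (or we give up).
bool march(float3 origin, float3 dir, out int3 hitCell)
{
    float3 p = origin;
    for (int i = 0; i < 256; i++)
    {
        p += dir * 0.05;
        int3 cell = int3(floor(p));
        if (getVoxel(cell))
        {
            hitCell = cell;
            return true;
        }
    }
    hitCell = int3(0, 0, 0);
    return false;
}
```

Because the whole scene is just a function, the shading step can call getVoxel on any neighbouring cell of the hit (which is what the 8 calls in voxelAO are doing), whereas a fragment shader running over a pre-built mesh only ever sees its own interpolated inputs.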