Why are there stretch marks when using my triplanar shader?

I found this triplanar shader here: http://www.martinpalko.com/triplanar-mapping/

I wanted it to work on a terraformed sphere (which I already have, generated procedurally, along with correct normals and tangents). I also found an old post here that I thought could help: How to apply a Tri-planar shader to a planet? - Questions & Answers - Unity Discussions

End goal: have the cliff texture apply to the sides (normals pointing along x and z) and blend in the grass along the top (normals close to y).

However, combining the two gives me texture stretching in SOME areas. I put the code from that post at the top of the “surf” function of the triplanar shader and converted it to work in model space. It seems fine on some sides, but the transitions have bad stretching in some spots. Perhaps the shader is OK and my normals are bad? I tested my normals and tangents before and they seem fine (the lighting works well). Here is my combined shader:

Shader "Custom/Triplanar/Planet" 
{
	Properties 
	{
		_DiffuseMapX("Diffuse Map X", 2D) = "white" {}
		_DiffuseMapY("Diffuse Map Y", 2D) = "white" {}
		_DiffuseMapZ ("Diffuse Map Z", 2D)  = "white" {}
		_TextureScale("Texture Scale", float) = 1
		_TriplanarBlendSharpness("Blend Sharpness", float) = 1
	}
	SubShader 
	{
		Tags { "RenderType"="Opaque" }
		LOD 200

		CGPROGRAM
		#pragma target 3.0
		// A custom vertex function in a surface shader is attached with the
		// vertex:vert modifier; a separate "#pragma vertex" line is only for
		// regular vertex/fragment shaders.
		#pragma surface surf Lambert vertex:vert

		sampler2D _DiffuseMapX;
		sampler2D _DiffuseMapY;
		sampler2D _DiffuseMapZ;
		float _TextureScale;
		float _TriplanarBlendSharpness;

		struct Input
		{
			float3 worldPos;    // provided by the surface shader, unused below
			float3 worldNormal; // provided by the surface shader, unused below
			float3 vertex;      // object-space position, filled in vert
			float3 normal;      // object-space normal, filled in vert
		};

		void vert(inout appdata_full v, out Input o) {
			UNITY_INITIALIZE_OUTPUT(Input, o);
			o.vertex = v.vertex.xyz; // v.vertex is a float4; take xyz explicitly
			o.normal = v.normal;
		}

		void surf (Input IN, inout SurfaceOutput o)
		{
			// Build a tangent basis from the direction to the planet center
			// and express the surface normal in that basis.
			float3 up = normalize(IN.vertex);
			float3 right = normalize(cross(up, float3(0, 1, 0)));
			float3 forward = normalize(cross(right, up));

			float3 localNormal = float3(dot(IN.normal, right), dot(IN.normal, up), dot(IN.normal, forward));

			// Find our UVs for each axis based on the object-space position of the fragment.
			half2 yUV = IN.vertex.xz / _TextureScale;
			half2 xUV = IN.vertex.zy / _TextureScale;
			half2 zUV = IN.vertex.xy / _TextureScale;
			// Now sample our three diffuse maps with each of the 3 UV sets we've just made.
			half3 yDiff = tex2D (_DiffuseMapY, yUV).rgb;
			half3 xDiff = tex2D (_DiffuseMapX, xUV).rgb;
			half3 zDiff = tex2D (_DiffuseMapZ, zUV).rgb;
			// Take the absolute value of the rotated local normal and raise it to
			// the power of BlendSharpness; the higher the value, the sharper the
			// transition between the planar maps will be.
			half3 blendWeights = pow (abs(localNormal), _TriplanarBlendSharpness);
			// Divide our blend mask by the sum of its components so that x+y+z=1.
			blendWeights = blendWeights / (blendWeights.x + blendWeights.y + blendWeights.z);
			// Finally, blend together all three samples based on the blend mask.
			o.Albedo = xDiff * blendWeights.x + yDiff * blendWeights.y + zDiff * blendWeights.z;
		}
		ENDCG
	}
}

This is for a Unity game I’m building here: http://martianworlds.com

Edit: I think my issue was that the coordinate planes used for the UVs were not rotated along with the normal; but doing that would always give a UV of (0,0) on the X,Z plane (the Y axis). Turns out plan B is working best, with very little FPS drop, so not bad; I’ll keep going in this direction. For those who want to know: the solution for me was to work in local object space, using the triplanar routine for a single texture. I did this for two textures and blended between them using the dot product of the surface normal and the normalized local vertex position (the “up” vector from the planet object’s center to the surface).
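For those who want the concrete version, here is a minimal sketch of that plan-B shader, under the same assumptions as above (object-space position and normal passed through the vertex function). _GrassMap, _CliffMap and _SlopeSharpness are names I made up for this example:

Shader "Custom/Triplanar/PlanetBlend"
{
	Properties
	{
		_GrassMap ("Grass Map", 2D) = "white" {}
		_CliffMap ("Cliff Map", 2D) = "white" {}
		_TextureScale ("Texture Scale", float) = 1
		_TriplanarBlendSharpness ("Blend Sharpness", float) = 1
		_SlopeSharpness ("Slope Sharpness", float) = 4
	}
	SubShader
	{
		Tags { "RenderType"="Opaque" }
		LOD 200

		CGPROGRAM
		#pragma target 3.0
		#pragma surface surf Lambert vertex:vert

		sampler2D _GrassMap;
		sampler2D _CliffMap;
		float _TextureScale;
		float _TriplanarBlendSharpness;
		float _SlopeSharpness;

		struct Input
		{
			float3 vertex; // object-space position
			float3 normal; // object-space normal
		};

		void vert (inout appdata_full v, out Input o)
		{
			UNITY_INITIALIZE_OUTPUT(Input, o);
			o.vertex = v.vertex.xyz;
			o.normal = v.normal;
		}

		// Plain triplanar lookup in object space: three planar projections
		// weighted by the matching components of the object-space normal.
		half3 Triplanar (sampler2D map, float3 pos, float3 normal)
		{
			half3 xDiff = tex2D (map, pos.zy / _TextureScale).rgb;
			half3 yDiff = tex2D (map, pos.xz / _TextureScale).rgb;
			half3 zDiff = tex2D (map, pos.xy / _TextureScale).rgb;
			half3 w = pow (abs(normal), _TriplanarBlendSharpness);
			w = w / (w.x + w.y + w.z);
			return xDiff * w.x + yDiff * w.y + zDiff * w.z;
		}

		void surf (Input IN, inout SurfaceOutput o)
		{
			float3 n = normalize(IN.normal);
			float3 up = normalize(IN.vertex); // from the planet center to the surface
			// Slope factor: ~1 on flat ground (normal parallel to up), ~0 on cliffs.
			half slope = pow (saturate(dot(n, up)), _SlopeSharpness);
			half3 grass = Triplanar (_GrassMap, IN.vertex, n);
			half3 cliff = Triplanar (_CliffMap, IN.vertex, n);
			o.Albedo = lerp (cliff, grass, slope);
		}
		ENDCG
	}
	FallBack "Diffuse"
}

The slope factor is close to 1 where the surface normal lines up with the local “up” vector and falls toward 0 on steep faces, so grass wins on flat ground and the cliff texture wins on the sides.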

The stretching appears because you don’t base your blending on the actual surface normal, like the original shader does, but on a normal rotated into a local space. That doesn’t make much sense, because the right texture has to be picked based on the actual orientation of the surface. In your example you have a very steep hill / cliff, so seen from the top view there is very little spread in the world-space projection, and the corresponding texture lookup samples just a few pixels across the whole surface. That is usually not a problem, because the world-space normal is used to select the right projection: the stretched projection gets a blend factor close to 0, so it isn’t visible at all. But since you use a normal that is local to a pretty arbitrary tangent space, the x, y and z components of the normal no longer correspond to the actual surface angle in relation to your 3 projections.

What exactly is the reason for calculating that strange tangent space? You only seem to use it for the triplanar mapping, and there it’s just wrong.
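To make that concrete: in object space the “unaltered” weight computation is just one line. This is a sketch that reuses the names from your shader and drops the rotated basis entirely:

			// The weights must live in the same space as the projections. With
			// object-space UVs (IN.vertex.xz etc.), weight by the object-space
			// normal directly; no rotation into another basis.
			half3 blendWeights = pow (abs(normalize(IN.normal)), _TriplanarBlendSharpness);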

If you want something like what IndieLegion roughly described in his post: the way he phrased it, it doesn’t make too much sense, but one way to get a two-texture approach would be to do the “unaltered” triplanar mapping using the world-space normal. Do this for both textures, the inner (core) one and the outer (surface) one, and then blend between them based on the height of the fragment, as in the sketch below. Keep in mind that with a single threshold for the blending between core and surface texture you get the blending seam at the same height everywhere on the planet. A blend texture that acts like a height map could apply different blending thresholds to various locations on the planet.
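A rough sketch of that inside surf, reusing a Triplanar helper like the one in the plan-B sketch above together with the worldPos / worldNormal inputs from the question’s shader; _CoreMap, _SurfaceMap, _BlendHeight and _BlendRange are made-up names:

			// Unaltered triplanar mapping with the world-space normal, once per texture.
			half3 core  = Triplanar (_CoreMap,    IN.worldPos, normalize(IN.worldNormal));
			half3 outer = Triplanar (_SurfaceMap, IN.worldPos, normalize(IN.worldNormal));
			// Blend by altitude: distance of the fragment from the planet center,
			// here taken in object space with the planet centered at the origin.
			float height = length(IN.vertex);
			half t = saturate((height - _BlendHeight) / _BlendRange);
			o.Albedo = lerp (core, outer, t);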

Of course you could also do the triplanar mapping for 3 textures, but that quickly gets out of hand. Keep in mind that each triplanar lookup requires 3 texture samples, so 3 triplanar-mapped textures already mean 9 samples per fragment.

As far as I understand it, you picked two online sources, merged them, and now you don’t know what you ended up with. Maybe this tutorial helps you understand the shader you’ve created:

http://developer.download.nvidia.com/CgTutorial/cg_tutorial_chapter01.html