How to blend Texture2D colors using shader graph

For context, I started using Shader Graph very recently.
Essentially, what I am trying to accomplish is blending my Texture2D colors together so the transitions look smooth.
Some background info: I am procedurally generating a color map. I have a plane with a material that I edit through a script. My Texture2D has “Material” and “Source” set to the material I use on the plane, so I can reference the Texture2D from the shader graph. The mesh (for which I am generating the heightmap) has a material that uses a shader graph, which I am currently trying to understand. In the script that generates the map, I have my regions and heights set to: Sand, 0 | Grass, 0.7 | Snow, 1
Currently, my material on the mesh that has the shader graph looks like this:


For some reason, the output (as you can see on the Sample Texture 2D) is not smooth at all, and currently looks like this:

As you can see, for some reason, the snow sits between the sand and the grass (the colors are also reversed).
Without splitting out the G value in the shader graph and feeding it into the Sample Texture 2D’s UV input, my mesh looks like this (which is just the Texture2D laid on top of the mesh).

If I only take a Position node, split it, and feed its G value into the base color, I can see some sort of height map.

How can I accomplish blending the colors together on the Texture2D using a shader graph?
Again, I am a complete beginner to shader graphs. If there are any further questions, ask and I shall answer.

I don’t see anything wrong with the shader graph. What does the texture look like, and what are its settings?
If the texture has a vertical gradient with smooth transitions between the three colors, a sufficiently high resolution and no point filtering, I don’t see what would be wrong.
If on the other hand the texture only has the three colors with no interpolated color values in between, you’ll only get a tiny bit of “smoothing” when using bilinear (or trilinear) filtering and the texture is sampled just at the border between two colors.

Keep in mind that my material is procedurally generated. My Texture2D then inherits that material, which then goes into the shader graph.
Here is my Texture2D.

I’ve tried changing almost all of the settings on the Texture, nothing seems to fix the issue.
Here, on the right side, is a simple Unlit/Texture material that receives the procedurally generated colors. I then set that as my Texture2D’s Material. From the shader graph, I then create a material (which you can see in the middle) and place it on the mesh.
Here are the settings on my Texture2D node in the shader graph:


Further help would be much appreciated.

I’m confused about what you’re trying to do (and how).

Your shader graph uses the object space y coordinate of the current fragment as UV coordinates (Shader Graph converts that single y value to a Vector2(y, y)), so you’re only sampling texture pixels along the diagonal from bottom left (y = 0) to top right (y = 1).
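To make that concrete, here is a minimal Python sketch of what feeding the same value into both UV components does (the 4×4 grid and the `sample` helper are illustrative stand-ins for a texture and a point sampler, not Unity API):

```python
# Hypothetical 4x4 "texture": each entry names its row letter and column index.
texture = [
    ["a0", "a1", "a2", "a3"],
    ["b0", "b1", "b2", "b3"],
    ["c0", "c1", "c2", "c3"],
    ["d0", "d1", "d2", "d3"],
]

def sample(tex, u, v):
    """Point-sample the texture at normalized UV coordinates in 0..1."""
    h, w = len(tex), len(tex[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return tex[y][x]

# Feeding the same value into both UV components only ever hits the diagonal:
diagonal = [sample(texture, t, t) for t in (0.0, 0.3, 0.6, 0.9)]
```

So all the pixels off the diagonal never influence the result at all.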
Judging from the small texture preview, the (Render)Texture only has the three different color values (beige, green, white) with no interpolated colors in between, so I’m confused how you expect interpolated colors to appear when you’re directly feeding the sampled texture colors to the color output.

To get smooth color changes, you need to either change the texture to have smooth color transitions or

EDIT: …if you tick “Enable Mip Maps” in the texture settings and set Filter Mode to Bilinear, you can let Unity create a mip chain (scaled down versions of the texture at 1/2, 1/4, 1/8, etc. resolutions) and explicitly sample a scaled down version of your texture, which essentially does what I describe how to do manually below for you automatically. To do this, replace the Sample Texture 2D node with a Sample Texture 2D LOD node and increase the LOD value for a blurrier output.
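To picture what the mip chain gives you, here is a rough Python sketch using a 1-D strip of grayscale values in place of a texture (the pair-averaging is a simplification of Unity's actual downscaling, but the idea is the same):

```python
def build_mips(strip):
    """Build a mip chain for a 1-D strip of values: each level halves the
    resolution by averaging neighboring pairs (length must be a power of 2)."""
    mips = [strip]
    while len(mips[-1]) > 1:
        prev = mips[-1]
        mips.append([(prev[i] + prev[i + 1]) / 2 for i in range(0, len(prev), 2)])
    return mips

# A hard edge between two flat colors gets progressively averaged away:
mips = build_mips([0.0, 0.0, 1.0, 1.0])
```

Sampling a higher mip level therefore reads pre-averaged (i.e. blurrier) colors, which is why increasing the LOD value smooths the transitions.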

…or calculate smooth color changes manually in the shader graph. I can only think of rather expensive ways to do this (sampling the texture multiple times above and below the current UV coordinates and mixing all those resulting colors together - this is what a one-dimensional blur filter does).
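For illustration, here is what such a one-dimensional box blur does, sketched in plain Python on a strip of values (in the actual graph each entry would be a texture sample at an offset UV, and the radius/weights are up to you):

```python
def blur_1d(colors, radius=1):
    """Box-blur a 1-D strip of values: average each sample with its
    `radius` neighbors above and below (window clamped at the edges)."""
    n = len(colors)
    out = []
    for i in range(n):
        lo = max(0, i - radius)
        hi = min(n - 1, i + radius)
        window = colors[lo:hi + 1]
        out.append(sum(window) / len(window))
    return out

# A hard 0 -> 1 edge becomes a short ramp:
blurred = blur_1d([0.0, 0.0, 1.0, 1.0])
```

Note this costs 2 × radius + 1 texture samples per fragment, which is why I call it expensive.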

If all you want is smooth transitions between the three colors based on the height, I would definitely recommend using a texture that is only 1 pixel wide, e.g. 512x1 and precompute a smooth gradient of the three colors (if that’s only done once during procedural generation you can use a shader(graph) that does many texture lookups as I described above), then sample that texture as you do now.
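A minimal sketch of precomputing such a gradient strip, in plain Python (in Unity this would be a C# loop writing pixels via Texture2D.SetPixel; the RGB values for the three regions are placeholders I made up, not your actual colors):

```python
def lerp(a, b, t):
    """Linearly interpolate between two RGB tuples."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def gradient_strip(stops, width=512):
    """Precompute a smooth 1-D gradient from (height, color) stops.
    `stops` must be sorted by height, spanning 0..1."""
    pixels = []
    for x in range(width):
        t = x / (width - 1)
        # Find the two stops surrounding t and interpolate between them.
        for (h0, c0), (h1, c1) in zip(stops, stops[1:]):
            if t <= h1:
                f = (t - h0) / (h1 - h0) if h1 > h0 else 0.0
                pixels.append(lerp(c0, c1, f))
                break
    return pixels

# Illustrative RGB stops matching the Sand 0 | Grass 0.7 | Snow 1 regions:
stops = [(0.0, (0.9, 0.8, 0.5)), (0.7, (0.2, 0.6, 0.2)), (1.0, (1.0, 1.0, 1.0))]
strip = gradient_strip(stops)
```

The resulting 512×1 strip can then be sampled with the height as the V coordinate, exactly as your graph does now.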

Thanks for taking the time to reply. Using the mip map solution did work for blending the colors, but the heightmap was still messed up. I noticed you said that I am converting a single y coordinate value to a Vector2, and that the value only goes from 0 to 1. When I set the height of my mesh to 1 or below, I do not see any sand on top of the grass, as it should be; but when I set the height to over 1, the heightmap gets weird.


The results I am looking for look a lot like this image (notice the blending of colors, here he also used noise to blend the colors somehow, but I won’t be trying to achieve that now).


Disregarding the scale, these two images are pretty similar, but the pixels on my mesh are very noticeable. I am really just looking for a way to blend the pixels together.
If there is a simpler way to achieve this, please let me know; shader graphs are not my thing.
Again, thank you for your time.

Ah, I believe I’m starting to understand a bit more / see what I misunderstood so far.
If I understand correctly you don’t want the colors to blend smoothly but rather the currently blocky edges between colors to be smooth instead.

I’m actually not sure how to solve that. I’d say having the Filter Mode set to Point doesn’t help, but that only produces tiny pixelation, not those big blocks.
I replicated the shader graph and created a Custom Render Texture with the settings from the screenshots and set the source to a texture containing 4 colors stacked vertically. I then get colors distributed by height like in your last picture, but way smoother edges between the colors (only the aforementioned slight pixelation due to Point Filtering):

So I guess there must be something weird with the contents of your texture.
Can you try exporting it to a .png and post that here? There should be an Export menu item in the three dots menu of the Custom Render Texture:

Sidenote:
Notice I did not say the y coordinate values used as UV coordinates are only between 0 and 1, just that the positions of the pixels sampled in the texture go from bottom left to top right when y goes from 0 to 1.
To actually get the y values to go from 0 to 1 you’d have to divide the object space y by the max mesh height in the shader graph, depending on how the mesh is set up you might be able to use the Object node for that.
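That normalization is just a subtract-and-divide, sketched here in plain Python (in the graph you'd build it from Subtract/Divide nodes; the min/max values are assumptions about your particular mesh):

```python
def normalized_height(y, min_y, max_y):
    """Map an object-space y coordinate into the 0..1 range expected by the
    UV input; values outside the mesh's height range are clamped."""
    t = (y - min_y) / (max_y - min_y)
    return min(max(t, 0.0), 1.0)

# e.g. a fragment halfway up a mesh spanning y = 0..3:
uv_y = normalized_height(1.5, 0.0, 3.0)
```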

What still puzzles me a bit is the way your texture looks and the way you sample it:
it looks like a top-down view of the terrain (thus I would expect the UVs to be based on the XZ positions of the fragments), but the UVs you use are based solely on the height. For that approach, the texture wouldn’t need to be square and could just be a 1px vertical strip as I said before. I don’t get how using (0, y) or (y, y) as UVs makes sense with how the texture looks.

I exported my Texture2D into an image:


The only difference I can spot is that you were using a fixed texture, whereas I am procedurally generating a noise map, converting it into a color map, and feeding that into the material which the Texture2D uses.

Realization: I think I get it; my Texture2D has to be a gradient, doesn’t it?
I recreated your little project and got this result -


using this texture.

We’re simply sampling by the height of the texture, aren’t we? That’s what is causing this weird shape.
The question is: how can we turn my Texture2D map into a height gradient?

Well, if the borders between colors in the texture are already blocky then no wonder the end result is blocky as well.
Again if all you want to do is set the color solely based on the height then that texture stores way more data than necessary. Sampling that texture using (y, y) as UVs also explains why you get the sand color at both the bottom and top of the mesh.

Regarding the gradient and coming back to this part of the first post:

I have my regions and height set to: Sand, 0 | Grass, 0.7 | Snow, 1

…can’t you just create a normal Texture2D in C#, say resolution 512x1 (though a lot less should still be enough, e.g. 128x1), iterate over the y coordinate and set the color at pixel (0, y) based on the region ranges (e.g. to “sand” if y <= 0.7f * texture.height)? (That is assuming the texture doesn’t change after the initial creation).
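The loop I mean would look roughly like this, sketched in plain Python (in Unity it would be a C# loop writing pixels with Texture2D.SetPixel; I interpret each region value as the height where that region starts, and the string names are placeholders for real Color values):

```python
def region_color(t, regions):
    """Pick the color of the last region whose start height is <= t.
    `regions` is a list of (start_height, color) sorted ascending."""
    color = regions[0][1]
    for start, c in regions:
        if t >= start:
            color = c
    return color

# Placeholder colors for the Sand 0 | Grass 0.7 | Snow 1 setup:
regions = [(0.0, "sand"), (0.7, "grass"), (1.0, "snow")]
width = 128
strip = [region_color(x / (width - 1), regions) for x in range(width)]
```

That gives hard stripes; to get smooth transitions you would interpolate between the two neighboring region colors instead of picking just one.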
There is also the Sample Gradient node in Shader Graph, but that’s more expensive than sampling a texture, though you could then just pass in a Gradient from C# to the graph. Setting the Gradient to Fixed would give the same visual result as sampling a texture with color stripes using point filtering.