Official thread for the Ground Shader (card here) and for the material detector (card).
It needs to be a toon shader with the ability to mix textures from up to 4 different ground types (dirt, sand, grass and cobblestone, potentially others) by painting vertex colours.
Depending on what's more convenient, it might be good to have it as a shader “add-on” or, since it doesn’t need a lot of the usual shader functionality, as a completely new toon shader (reusing only the innermost Subgraphs for the toon light calculations).
Last night I was looking into how to add support for vertex colors in Shader Graph (or, if I already have vertex color support, how to eliminate the warning message in the Polybrush window), but I haven’t found an answer yet.
I use the ToonShading subgraph because I receive this error when I use the internal lighting subgraphs:
To avoid this error I made the ground shader a shader add-on to use with the ToonShading subgraph. While doing this, I decided to move the outline subgraph out of the main ToonShading subgraph, as the ground shader is not outlined. I made this change in all the shader variants using the outline.
The ground shader would use the combined diffuse lighting from the ToonLightingModel and AdditionalLightsToon subgraphs but ignore specular light. Ideally I can use the internal subgraphs and avoid unused properties such as Specular Map or Specular Color.
Looks pretty good! I like the idea of using noise to blend them, so it’s not just a fade. We could overlay another noise too, at a different scale, to make the noise texture less readable.
I also think that, in the final terrains, we should make it so there are vertices where we want to blend. This way it wouldn’t be just a grid of quads as it is here, which gives away the fact that the textures are blended using vertex colours (it creates those hexagon-shaped patches).
I think it looks really good, but there’s a weird ghosting effect, almost like a chromatic aberration, along the contours…?
Hehe, of course, the more geometry you have the better it looks. But maybe what we will do in the final geo is not to subdivide everything, but only cut where we need some material jumps, providing the extra edges for the fade.
I’ll look into that chromatic aberration. Right now I’m merging the current project into my fork and then I can make a pull request. I can also work on the ground debug shader.
Any idea why using the following graph setup returns this error? Parse error: syntax error, unexpected $end, expecting TOK_SHADER
Edit: Since one of the requirements of the debug shader was to enable the visualization of individual channels, I could replace this boolean keyword with the W channel of a Vector 4 property I would use as a channel mixer.
I’d prefer to display this in the inspector as checkboxes (as the Codecks card recommends) and I could look into some custom shader GUI to achieve that. Even though the debug feature will be editor-only, I’d like to avoid unnecessary booleans. Of course I could also display this as one Boolean with three Vector 1 sliders.
@cirocontinisio I’ve removed the ghosting effect that was happening along the contours. I was able to ask Joyce if she had any ideas about what might be the cause, and she noticed that I was using the Vector 4 output from the triplanar’s noise texture. Adding a Split node to use only one channel fixed this.
I can put together a pull request but I had thought about making a custom shader GUI to better communicate how to use the vertex color debug function.
Looks great. I’d say, go ahead with the PR, and then you can add the Inspector later on.
As far as I know, Booleans in Shader Graph will be converted to multiplications by 1 or 0 if possible. You should be able to verify this by inspecting the generated code. Basically, instead of being a real branch, the values of each branch are multiplied by 1 or 0 depending on the value of the Boolean, and then added together. This simulates a switch as if it were an IF condition, without it actually being one (which, as you know, is not handled well in shaders).
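As an illustration of the pattern (not the actual generated code, just the same idea written out in C# terms): both branches get evaluated and blended by a 0-or-1 factor, so there is no real conditional.

// Conceptual sketch of a "flattened" branch: k is the Boolean converted to 0 or 1.
float SelectBranchless(float k, float whenTrue, float whenFalse)
{
    // Equivalent to an if/else, but without branching: one side is multiplied away.
    return whenTrue * k + whenFalse * (1f - k);
}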
So, as the card suggests, I was thinking of adding a script for the player that raycasts to the ground, gets the mesh color from the collider, and returns a value according to a dictionary matching the colors with the materials.
We also have the case of rocks and other materials that don’t depend on the color of the mesh, so for those we would return the name, the tag, or maybe the material itself could be mapped into the dictionary too.
And for the “run every x frames” requirement, I was wondering if an if-check in the Update function with a serialized variable would be enough, or if we need to implement something using InvokeRepeating or similar; see the rough sketch below.
Well, those are my initial thoughts for this task. Let me know what you all think.
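Something like this is what I have in mind, as a rough sketch (GroundMaterial, colorToMaterial and checkEveryNFrames are all made-up names, and the actual vertex color lookup is left out for now):

using System.Collections.Generic;
using UnityEngine;

// Rough sketch of the detector: raycast down every N frames and map the result
// to a ground type. All names here are placeholders.
public enum GroundMaterial { Dirt, Sand, Grass, Cobblestone }

public class GroundMaterialDetector : MonoBehaviour
{
    [SerializeField] private int checkEveryNFrames = 10;
    [SerializeField] private LayerMask groundLayers;

    // Maps the painted vertex colors to ground types (assumes "pure" channel values).
    private readonly Dictionary<Color, GroundMaterial> colorToMaterial = new Dictionary<Color, GroundMaterial>
    {
        { Color.red,   GroundMaterial.Dirt },
        { Color.green, GroundMaterial.Grass },
        { Color.blue,  GroundMaterial.Sand },
        { Color.black, GroundMaterial.Cobblestone },
    };

    public GroundMaterial CurrentMaterial { get; private set; }

    private void Update()
    {
        // Simple throttle instead of InvokeRepeating: only check every N frames.
        if (Time.frameCount % checkEveryNFrames != 0)
            return;

        if (Physics.Raycast(transform.position, Vector3.down, out RaycastHit hit, 2f, groundLayers))
        {
            // The vertex color lookup at the hit goes here (see the snippets further down);
            // once we have the color, it's a dictionary lookup:
            // if (colorToMaterial.TryGetValue(paintedColor, out GroundMaterial material))
            //     CurrentMaterial = material;
        }
    }
}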
Hey! I found this post. I didn’t test it or anything, and I’m not sure we can use it because it relies on the Terrain component and we aren’t using it, but perhaps you’ll find it useful. The idea seems pretty clever: by getting the alpha maps at a specific location, it then uses those values to blend the footstep sounds together according to the textures. I really don’t know much about this topic, so I have no idea if this is something we can do or if it’s specific to the Terrain component; just throwing it out there.
I think we should use the same animation event we already have to play the footstep sounds and put the ground detection logic there; that way every footstep will have the correct sound. What do you think?
Terrain is entirely different. The alphamaps are basically a 3-dimensional array: one dimension for the different terrain textures and two dimensions for the position (without height). That means you can calculate the index in the array from the player position.
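Just for reference (since we aren’t using the Terrain component), the lookup described in that post would be roughly this:

using UnityEngine;

// Reference sketch only, we don't use Terrain: convert a world position into
// alphamap indices and read the blend weight of each terrain layer there.
public static class TerrainBlendSampler
{
    public static float[] GetBlendWeights(Terrain terrain, Vector3 worldPos)
    {
        TerrainData data = terrain.terrainData;
        Vector3 local = worldPos - terrain.transform.position;

        int x = Mathf.Clamp((int)(local.x / data.size.x * data.alphamapWidth), 0, data.alphamapWidth - 1);
        int z = Mathf.Clamp((int)(local.z / data.size.z * data.alphamapHeight), 0, data.alphamapHeight - 1);

        // A 1x1 sample: [0, 0, layer] holds the weight of each texture layer at that spot.
        float[,,] alphamaps = data.GetAlphamaps(x, z, 1, 1);

        float[] weights = new float[alphamaps.GetLength(2)];
        for (int i = 0; i < weights.Length; i++)
            weights[i] = alphamaps[0, 0, i];
        return weights;
    }
}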
In our case the terrain is a mesh and the texture information is in the vertex colors. Conveniently, if we use a Raycast against its MeshCollider, the RaycastHit also returns the index of the hit triangle, which we can use with mesh.triangles, an array of integers in which the vertex indices come in groups of 3.
Essentially you would get the 3 vertex colors this way:
Color vertexColor1 = mesh.colors[mesh.triangles[hit.triangleIndex * 3 + 0]];
Color vertexColor2 = mesh.colors[mesh.triangles[hit.triangleIndex * 3 + 1]];
Color vertexColor3 = mesh.colors[mesh.triangles[hit.triangleIndex * 3 + 2]];
Typically you would define local variables to store mesh.colors and mesh.triangles, though I don’t know whether that is efficient in terms of memory usage.
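Put together, a sketch of that lookup could look like this (the barycentric blend at the end is optional; you could also just pick the dominant corner):

using UnityEngine;

// Sketch: read the vertex colors of the hit triangle and blend them at the hit point.
// Only works with a MeshCollider and a readable mesh.
public static class VertexColorSampler
{
    public static Color GetHitVertexColor(RaycastHit hit)
    {
        MeshCollider meshCollider = hit.collider as MeshCollider;
        if (meshCollider == null || meshCollider.sharedMesh == null || hit.triangleIndex < 0)
            return Color.white;

        Mesh mesh = meshCollider.sharedMesh;
        Color[] colors = mesh.colors;        // cache the arrays, the getters return copies
        int[] triangles = mesh.triangles;
        if (colors.Length == 0)
            return Color.white;

        Color c0 = colors[triangles[hit.triangleIndex * 3 + 0]];
        Color c1 = colors[triangles[hit.triangleIndex * 3 + 1]];
        Color c2 = colors[triangles[hit.triangleIndex * 3 + 2]];

        // Weight each corner by the barycentric coordinate of the hit point.
        Vector3 b = hit.barycentricCoordinate;
        return c0 * b.x + c1 * b.y + c2 * b.z;
    }
}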
I didn’t know about that property in the raycast hit, that is indeed convenient. I don’t think memory would be an issue there, we only need to store the info of the mesh we’re currently standing on.
Ok, as an update on this task @Smurjo, I tried your idea:
Color vertexColor1 = mesh.colors[mesh.triangles[hit.triangleIndex * 3 + 0]];
Color vertexColor2 = mesh.colors[mesh.triangles[hit.triangleIndex * 3 + 1]];
Color vertexColor3 = mesh.colors[mesh.triangles[hit.triangleIndex * 3 + 2]];
but I’m getting an error
Not allowed to access colors on mesh ‘Combined Mesh (root: scene) 6 Instance’ (isReadable is false; Read/Write must be enabled in import settings)
UnityEngine.Mesh:get_colors()
I think this is because the mesh is static and we cannot get the values that way.
I’m trying to find a workaround for this; if anyone knows how I could do this, I would appreciate the help.
That’s indeed a show-stopper. I can’t see how we can use vertex colors at all if we can’t read them.
We would have to keep a copy of the mesh which we can read. But having a closer look at the card, it also asks for rocks to be considered as well. I think we might be better off with a 2D texture made especially for this purpose. Here we would simply calculate the index from the player position (without height, meaning the same footstep sounds on top of the arch as under the arch of the glade). The texture wouldn’t need a very high resolution; I guess one pixel per 0.5 m should do. Thinking about it, a “sound texture” is a very interesting idea indeed: we could also encode ambient sounds like birds singing, fat sizzling or soup bubbling in it.
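A quick sketch of that idea (all names are made up; the texture would need to be readable and laid over the level in X/Z):

using UnityEngine;

// Sketch of the "sound texture" idea: a low-res, readable Texture2D mapped over
// the level in X/Z and sampled at the player's position. All names are placeholders.
public class SoundMapSampler : MonoBehaviour
{
    [SerializeField] private Texture2D soundMap;     // e.g. 1 pixel per 0.5 m
    [SerializeField] private Vector2 levelOrigin;    // world X/Z of the map's corner
    [SerializeField] private Vector2 levelSize;      // world X/Z extent the map covers

    public Color SampleAt(Vector3 worldPos)
    {
        // Convert the world X/Z position into 0..1 coordinates over the map.
        float u = (worldPos.x - levelOrigin.x) / levelSize.x;
        float v = (worldPos.z - levelOrigin.y) / levelSize.y;

        // GetPixelBilinear takes normalized coordinates; the texture must be readable.
        return soundMap.GetPixelBilinear(u, v);
    }
}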
Hmm, interesting issue. Is that coming from the rocks, or from the ground? If it’s the ground, we can maybe avoid making it static. Or maybe there’s some other trick. For the rocks it’s even easier, and we might end up not batching them but just using an instanced shader.
But as @Smurjo mentioned, making it read/write means you are maintaining two copies of the mesh in memory, which is not the best for performance unless we absolutely need it.
There are a few issues with that: you lose the three-dimensionality of the sounds. You basically have them on a flat 2D plane, but you don’t know how high they are, so if you climb on rocks you will still hear the sound of something 10 meters below as if it were immediately next to you.
The issue is coming from all of them. I tried removing the static flag and it works for the ground, but palms and rocks are still unreadable,
though I think those don’t matter since they don’t have mesh colors. For those I was thinking of using their tag to identify whether it is a palm, rock, bush, etc., and another tag for the ground: if it’s ground we run the color logic to get the right material, else we return the tag, roughly like the sketch below.
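Something along these lines (“Ground” is a placeholder tag, MapColorToGroundType a hypothetical helper, and the color lookup is the snippet from above):

// Sketch: vertex-color lookup only for the ground, tags for everything else.
private string GetSurfaceType(RaycastHit hit)
{
    if (hit.collider.CompareTag("Ground"))
    {
        Color c = VertexColorSampler.GetHitVertexColor(hit);
        return MapColorToGroundType(c);
    }

    // Palms, rocks, bushes etc. have no vertex colors, so the tag is enough.
    return hit.collider.tag;
}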
It’s an exception, though, that the player can be at two very different heights less than 0.5 m apart in the same scene with different sounds. Mostly we use high rocks to separate scenes, or the height doesn’t matter if we e.g. hear birds twittering in the forest. Neither would it matter if the footstep sound under the arch and on top of the arch (not many would go there) were the same. I doubt it is much of a restriction if you can’t put sizzling fat (or whatever is making sound in a specific spot) within 0.5 m of a rock, as long as you’re aware of the system’s limitations.