Is there a way to transform a position from local space to UV space? For example, if a random dot is placed on a cube and I know the dot's local-space position on the cube, can I get the position of that dot in the cube's UV space?
Sure!
Find out which triangle you're hitting, get the barycentric coordinate for that point on the triangle, and then use it to interpolate the triangle's UVs. Easy enough to do in C#.
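A minimal C# sketch of that approach, assuming you're raycasting against a MeshCollider (Unity's RaycastHit already hands you `triangleIndex` and `barycentricCoordinate`, so the interpolation is just a weighted sum):

```csharp
using UnityEngine;

public static class LocalToUV
{
    // Raycast against a MeshCollider and interpolate the hit triangle's
    // UVs using the barycentric coordinate of the hit point.
    public static bool TryGetUV(Ray ray, out Vector2 uv)
    {
        uv = Vector2.zero;
        if (!Physics.Raycast(ray, out RaycastHit hit))
            return false;

        var meshCollider = hit.collider as MeshCollider;
        if (meshCollider == null || meshCollider.sharedMesh == null)
            return false;

        Mesh mesh = meshCollider.sharedMesh;
        Vector2[] uvs = mesh.uv;
        int[] tris = mesh.triangles;

        // The three UVs of the triangle that was hit.
        Vector2 uv0 = uvs[tris[hit.triangleIndex * 3 + 0]];
        Vector2 uv1 = uvs[tris[hit.triangleIndex * 3 + 1]];
        Vector2 uv2 = uvs[tris[hit.triangleIndex * 3 + 2]];

        // Barycentric interpolation: the three weights sum to 1.
        Vector3 b = hit.barycentricCoordinate;
        uv = uv0 * b.x + uv1 * b.y + uv2 * b.z;
        return true;
    }
}
```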
In a shader… that’s … harder.
Realistically there's no way to take an arbitrary 3D position on a mesh and map that to a UV position in a shader, not unless you pass all of the mesh data to the shader as a custom structured buffer that you iterate over, or find some other way to store the data, like a 3D lookup table (which will suffer from horrible aliasing), or some other approximation if you can map your UV layout to some kind of mathematical function. Technically, for the very specific case of a cube, you can use a cube map instead of a normal 2D texture and use the local position as the cube map's UVW. Then the position is the UV and you're done.
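For that cube-map trick, the fragment shader side is just a cube-map sample with the object-space position. A sketch, assuming a cube centered on its local origin and a `v2f` struct whose vertex shader passes through the object-space position as `localPos` (both are assumptions, not anything from a specific shader):

```hlsl
// Sketch (Unity CG/HLSL): sample a cube map with the object-space
// position instead of a 2D UV.
samplerCUBE _CubeTex;

fixed4 frag (v2f i) : SV_Target
{
    // i.localPos is the interpolated object-space position; for a cube
    // centered on the origin its direction doubles as a UVW.
    return texCUBE(_CubeTex, normalize(i.localPos));
}
```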
Thanks for the answer. I also found this function: https://docs.unity3d.com/ScriptReference/RaycastHit-textureCoord.html I think it does what you mentioned.
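Using it looks something like this (a quick sketch; as far as I can tell it only returns a valid UV when the object has a MeshCollider):

```csharp
// textureCoord does the barycentric UV interpolation for you,
// but only works against a MeshCollider.
if (Physics.Raycast(Camera.main.ScreenPointToRay(Input.mousePosition), out RaycastHit hit))
{
    Vector2 uv = hit.textureCoord; // interpolated UV at the hit point
    Debug.Log("UV at hit: " + uv);
}
```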
One more question, really complicated for me. Is it possible to move a vector, in my example the WorldSpaceNormal, in screen-space UV without using a GrabPass or similar? With a GrabPass I can just put the material in a second slot, grab the rendered image, and add some value to the UV coordinates of that grabbed image. But I don't think it is possible in one render pass, right?
https://prnt.sc/sjvb6d Here is an example cube, and in one pass I want to move these world normals in the view's up direction. Like this: https://prnt.sc/sjvgsb (I edited it so you can see it more clearly).
Yeah. This is one of those questions that if you have to ask about it, then the answer is effectively no.
Like I said above, technically possible, but not actually practical without really, really understanding what you're trying to do. In this case you'd need to know the position of the box you're rendering, and in that box's shader raytrace a box of the same size, but slightly higher. Totally doable, but if that sentence sounded scary, probably not something you want to tackle immediately.
What if you placed the dot on a texture and fed the texture into the shader rendering the cube's face? Then anywhere on the texture where .rgb != 0 is the place where the dot is. I might not be understanding what you are trying to do, though.