I assume I have to make some calculation to figure out the aspect ratio of the mesh and how much space the mesh is taking onscreen? (Position, scale, and distance to camera?)
And then somehow I have to send these calculations to the shader to correctly tile the texture on the mesh?
From what you’ve described above, you seem to be using the whole texture surface on the mesh, which is why this happens; it has nothing to do with aspect ratio.
Are you not using the correct texture coordinates (UVs) in the mesh? You calculate that region’s UV coordinates and use them.
Personally I only know how to edit this in Blender, but in Unity a simple (not precise) way would be to modify the material’s offsets, as you can set the scale and offset to pick out a particular section of the original image. But this doesn’t work well if you need many parts of said image, as you’d have to make a separate material for each different object to render it.
I think there is a way to set UVs with code in Unity, but it sounds complicated.
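For illustration, here is a minimal sketch of the material-offset approach described above. It assumes the object uses a standard shader with a main texture; the rect values are placeholders, and note that accessing `Renderer.material` creates a per-object material copy, which is exactly the cost mentioned above:

```csharp
using UnityEngine;

public class CropViaMaterial : MonoBehaviour
{
    void Start()
    {
        // Renderer.material instantiates a copy of the material for this
        // object, so every object cropped this way gets its own material.
        var mat = GetComponent<Renderer>().material;

        // Show only the top-right quarter of the texture (placeholder values):
        mat.mainTextureScale = new Vector2(0.5f, 0.5f);  // use half the texture in each axis
        mat.mainTextureOffset = new Vector2(0.5f, 0.5f); // start sampling at the center
    }
}
```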
I don’t have the details of your implementation so I have no idea what part of Unity you’re specifically asking about. You’re just asking broad questions and expecting an answer. You hinted at ShaderGraph hence being asked about it. Your original post was in scripting but you’re asking graphics questions (Mesh). I provided the API on how to set the UV. Now you want “math” but I have no details about where you are with it.
Provide some specific details for us to plug in the specific details you require.
Provide some implementation details not a description.
First, do you know what UVs are? If not then it’s better to say! I provided a link that gave you an exact API. You can use this to specify what part of the texture the vertices of a mesh use. This is what UVs are so I’m not sure why I’d need to provide more detail if you know what UVs are.
Given a Mesh which has a material that is presenting the texture, you can set the vertex UVs to specify what part of the texture each is assigned to. In this way you’re selecting what part of the texture is being used i.e. a “window” on the texture. This is just UV stuff.
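As a rough sketch of that “window on the texture” idea, assuming a simple quad whose default UVs span (0,0)–(1,1), you could remap each vertex UV into a normalized sub-rectangle (the rect values here are placeholders):

```csharp
using UnityEngine;

public class TextureWindow : MonoBehaviour
{
    // Normalized window on the texture: x, y is the bottom-left corner,
    // then width and height. Placeholder values: the middle quarter.
    public Rect window = new Rect(0.25f, 0.25f, 0.5f, 0.5f);

    void Start()
    {
        var mesh = GetComponent<MeshFilter>().mesh;
        var uvs = mesh.uv;

        // Remap each existing UV (assumed 0..1 on a quad) into the window.
        for (int i = 0; i < uvs.Length; i++)
        {
            uvs[i] = new Vector2(
                window.x + uvs[i].x * window.width,
                window.y + uvs[i].y * window.height);
        }

        mesh.uv = uvs; // assign the modified UVs back
    }
}
```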
You’re asking me how to use it? That’s in the scripting reference.
If you’re asking me what values to put in then how could I possibly know that? As I’ve said above, I don’t have any details of your project (just a vague description) so I cannot give you values or variables or calculations to use.
Well, that was the whole point of the post; there must be some math that works for any size mesh and gives the corresponding UVs. If you don’t know what to put in the UVs, then I know even less.
Should I make a new thread, this time dedicated to finding the UV values?
Do you know what UVs are? I asked above. UVs relate to the texture size, not the mesh “size”. As the docs state, they’re normalized texture coordinates.
At the risk of this becoming a tutorial: given a texture, (0.5, 0.5) is the center of the texture. With your texture, you have a coordinate of what you want; you need to turn that into texture coordinates.
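To make the normalization concrete, converting a pixel coordinate on the texture into normalized UV space is just a division by the texture’s dimensions. A small helper (the name `PixelToUv` is my own, not a Unity API):

```csharp
using UnityEngine;

public static class UvMath
{
    // Convert a pixel position on a texture into normalized (0..1) UV space.
    public static Vector2 PixelToUv(Vector2 pixel, int texWidth, int texHeight)
    {
        return new Vector2(pixel.x / texWidth, pixel.y / texHeight);
    }
}

// e.g. on a 512x512 texture, pixel (256, 256) maps to UV (0.5, 0.5) -- the center.
```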
No, you don’t need another thread but in the end, it’s up to you.
You are not obliged to give me a tutorial, but I’m thankful for it nonetheless.
I’m sure that, given the position of a rectangular area inside the camera’s view, there must be some way to get its rect in screen space; I feel like it involves a lot of math and code to find it.
So are you trying to make this work specifically for a rectangular mesh that is always facing the screen? Or is this something that needs to be more general and work for any mesh that could exist in 3D space? Also, is the screenshot taken of the full screen or just some part of it? Depending on your answer, this can range from pretty simple to pretty complex.
The only part of the screen I really need is the part that is going to be displayed on the mesh
(anyway, it’s always going to be the region occupied by the mesh in the camera view).
The mesh is always a rectangle, but it might change position and sometimes have a different scale, being a square or wider than it is tall, or be at a different position on the screen.
In that case you have it really really easy. Just get the screen position of each vert in the quad and convert it to a percentage of the screen’s size. That is your UV coord for that vertex.
Off the top of my head some pseudo code would be something like:
Camera camera; // the camera rendering the quad
var trans = QuadObject.transform;
var filter = QuadObject.GetComponent<MeshFilter>();
var mesh = filter.mesh; // or use sharedMesh if that is more appropriate in this case
var verts = mesh.vertices;
var uvs = new Vector2[verts.Length];
for (int i = 0; i < verts.Length; i++)
{
    // transform the vert to world space, then screen space in pixels,
    // then finally a percentage of the screen's size in pixels
    var worldVert = trans.TransformPoint(verts[i]);
    var screenVert = camera.WorldToScreenPoint(worldVert);
    uvs[i] = new Vector2(
        screenVert.x / camera.pixelWidth,
        screenVert.y / camera.pixelHeight
    );
}
// assign calculated uvs back to the mesh
mesh.uv = uvs;
This code might not be 110% right. It’s been a while since I had to do any mesh manipulation, so I might have gotten some details wrong, or the direction of the UVs might be flipped, but it should get you started. Heck, it might even work on the first try!