Hello everyone, thanks for clicking my question. I’ve been stuck on this for a week now and I’m still really lost. This is the first question I’ve ever asked, so I hope I don’t mess up the format…
I’m coding a Minecraft Skin Editor for Android using Unity 2020.3.30f1, with C# as the language. I have a 3D model of the character with a default skin. The default texture shows up properly on the character, so I believe the UV unwrapping is correct.
I’m currently trying to code the most basic tool for the Skin Editor: a 1-pixel brush.
I want the user to be able to modify the texture of the 3D model by clicking on the pixel they want to change.
In my mind, the best-case scenario would be achieving this with a Box Collider, since each limb of a Minecraft skin is a cuboid or a cube.
I tried using RaycastHit.textureCoord, but it always returns zero unless the hit collider is a Mesh Collider.
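For reference, my understanding is that the Mesh Collider route would look roughly like this (untested sketch; `brushColor` is a placeholder for whatever colour the brush uses):

```csharp
using UnityEngine;

// Untested sketch of the Mesh Collider route. As far as I can tell,
// RaycastHit.textureCoord is only filled in when the hit collider is a
// MeshCollider; for any other collider type it is (0, 0).
public class MeshColliderBrush : MonoBehaviour
{
    public Color brushColor = Color.red; // placeholder brush colour

    private void drawOnHitMeshCollider(RaycastHit hit)
    {
        if (!(hit.collider is MeshCollider))
            return; // textureCoord would just be zero here

        Texture2D texture = hit.collider.GetComponent<Renderer>().material.mainTexture as Texture2D;
        if (texture == null)
            return;

        // textureCoord is already a 0..1 UV coordinate, so one multiply
        // and a floor gives the pixel. (Floor, not Round, and a Min so
        // that uv == 1 still lands inside the texture.)
        int x = Mathf.Min(Mathf.FloorToInt(hit.textureCoord.x * texture.width), texture.width - 1);
        int y = Mathf.Min(Mathf.FloorToInt(hit.textureCoord.y * texture.height), texture.height - 1);

        texture.SetPixel(x, y, brushColor);
        texture.Apply();
    }
}
```

This is exactly the approach I was hoping to avoid by using Box Colliders.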
I feel like a Mesh Collider would be a bit overkill, since the entire 3D model has 8,064 triangles.
It would be the only 3D model in the entire program, so it may not be that bad, but my optimization instincts tell me there’s a better way to do it.
In my previous attempts I managed to colour the back of the character: no matter where I clicked, it would colour a very small section on the back. I tried scaling everything down, to 0.001 or something like that, and that seemed to enlarge the area the clicks would land in. So I feel like the problem is something related to scale, but I have no idea what exactly.
This is the code I have so far:
private void drawOnHit(RaycastHit hit)
{
    // Get the clicked object's texture and assign it to a local variable
    Texture2D texture = hit.collider.GetComponent<Renderer>().material.mainTexture as Texture2D;
    if (texture == null)
    {
        Debug.LogWarning("No texture found on object.");
        return;
    }

    // Get the local position of the clicked point relative to the center of the texture
    Vector3 localPos = hit.collider.transform.InverseTransformPoint(hit.point);

    // Calculate the UV coordinates based on the difference in size between the texture and the collider bounds
    float widthRatio = texture.width / hit.collider.bounds.size.x;
    float heightRatio = texture.height / hit.collider.bounds.size.y;
    Vector2 uv = new Vector2(
        Mathf.Clamp01((localPos.x + hit.collider.bounds.extents.x) * widthRatio),
        Mathf.Clamp01((localPos.y + hit.collider.bounds.extents.y) * heightRatio));

    Debug.Log("localPos is = " + localPos);
    Debug.Log("localPos.x is = " + localPos.x);
    Debug.Log("localPos.y is = " + localPos.y);
    Debug.Log("hit.collider.bounds = " + hit.collider.bounds);
    Debug.Log("hit.collider.bounds.extents.x = " + hit.collider.bounds.extents.x);
    Debug.Log("hit.collider.bounds.extents.y = " + hit.collider.bounds.extents.y);
    Debug.Log("hit.collider.bounds.size.x = " + hit.collider.bounds.size.x);
    Debug.Log("hit.collider.bounds.size.y = " + hit.collider.bounds.size.y);

    // Convert UV coordinates to pixel coordinates
    int x = Mathf.RoundToInt(uv.x * texture.width);
    int y = Mathf.RoundToInt(uv.y * texture.height);
    Debug.Log("Pixel coordinates: (" + x + ", " + y + ")");

    if (x >= 0 && x < texture.width && y >= 0 && y < texture.height)
    {
        texture.SetPixel(x, y, color);
        texture.Apply();
        hit.collider.GetComponent<Renderer>().material.mainTexture = texture;
        rawImage.texture = texture;
    }
}
I added debug logs almost everywhere to try to figure out what’s going on, but I can’t wrap my head around it. This is what the console prints after I click one of the arms of the character:
localPos is = (0.0, 0.0, 0.0)
localPos.x is = -0,007495117
localPos.y is = -0,001375005
hit.collider.bounds = Center: (1002.5, 6.0, -20.0), Extents: (1.5, 0.6, 0.6)
hit.collider.bounds.extents.x = 1,549988
hit.collider.bounds.extents.y = 0,5500007
hit.collider.bounds.size.x = 3,099976
hit.collider.bounds.size.y = 1,100001
Pixel coordinates: (64, 64)
The texture has a resolution of 64x64.
In previous attempts, without scaling everything down, the Pixel Coordinates would be a number between 29 and 33.
After I scaled everything down to a size of 0.001 or similar (maybe more or fewer zeros), the texture coordinates would expand their… “scope”. But they still didn’t do what I intended.
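To check whether it really is a scale thing, I plugged the logged numbers into the UV formula by hand, outside Unity (plain C#, using the arm values from the console output above):

```csharp
using System;

// Plain C# reproduction of the UV formula from drawOnHit, fed with the
// values from the console log above (64x64 texture, arm collider bounds).
class UvSanityCheck
{
    static void Main()
    {
        float localPosX = -0.007495117f;
        float extentsX  = 1.549988f;
        float sizeX     = 3.099976f;
        int textureWidth = 64;

        // widthRatio is pixels per world unit, so this product is already
        // a pixel-sized number (~31.8), not a 0..1 UV value.
        float widthRatio = textureWidth / sizeX;
        float product = (localPosX + extentsX) * widthRatio;
        Console.WriteLine(product); // ≈ 31.8

        // Clamp01 then flattens it to 1, and multiplying by the texture
        // width again lands the click on pixel 64.
        float clamped = Math.Clamp(product, 0f, 1f);
        int x = (int)Math.Round(clamped * textureWidth);
        Console.WriteLine(x); // 64
    }
}
```

So the intermediate product is already around 31.8 (suspiciously close to the 29–33 range I saw before scaling things down), which might explain where the (64, 64) comes from.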
Thanks for reading this far. I highly appreciate your time. I hope your knowledge will enlighten my mind.
PS: Here’s the code for the raycast, in case it’s important as well:
public void castRayOnClick(LayerMask layerMask)
{
    Vector3 mousePos = Input.mousePosition;
    mousePos.z = 10f;
    mousePos = cam.ScreenToWorldPoint(mousePos);
    Debug.DrawRay(transform.position, mousePos - transform.position, Color.blue);

    if (Input.GetMouseButtonDown(0))
    {
        Ray ray = cam.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, 100, layerMask, QueryTriggerInteraction.Collide))
        {
            Debug.Log(hit.transform.name);
            drawOnHit(hit);
            Debug.Log("castRayOnClick ended");
            // Here you can implement what you want the click to do
        }
    }
}