Click on 3D model and draw on its texture using Box Collider and RaycastHit

Hello everyone, thanks for clicking on my question. I’ve been stuck on this for a week now and I’m still really lost. This is the first question I’ve ever asked, so I hope I don’t mess up the format…

I’m coding a Minecraft Skin Editor for Android using Unity 2020.3.30f1 with C#. I have a 3D model of the character with a default skin. The default texture shows up properly on the character, so I believe I did the UV unwrapping properly.

I’m currently trying to code the most basic tool for the Skin Editor. Just a 1-pixel brush.
I want the user to be able to modify the texture of the 3D model by clicking on the pixel they want to change.
In my mind, the best-case scenario would be achieving this with a Box Collider, since each limb of a Minecraft skin is a rectangular box or a plain cube.

I tried using RaycastHit.textureCoord, but it always returns zero when not used alongside a Mesh Collider.
I feel like a Mesh Collider would be a bit overkill, since the entire 3D model has 8,064 triangles.
It would be the only 3D model in the entire program, so it may not be that bad, but my optimization instincts tell me there’s a better way to do it.

In my previous attempts I managed to colour the back of the character: no matter where I clicked, it would colour a very small section on the back. I tried scaling everything down, to 0.001 or something like that, and that seemed to increase the area where all the clicks would go. So I feel like the problem is something related to scale, but I have no idea.

This is the code I have so far:

```csharp
private void drawOnHit(RaycastHit hit)
{
    // Get the clicked object's texture and assign it to a local variable
    Texture2D texture = hit.collider.GetComponent<Renderer>().material.mainTexture as Texture2D;
    if (texture == null)
    {
        Debug.LogWarning("No texture found on object.");
        return;
    }

    // Get the local position of the clicked point relative to the center of the texture
    Vector3 localPos = hit.collider.transform.InverseTransformPoint(hit.point);

    // Calculate the UV coordinates based on the difference in size between the texture and the collider bounds
    float widthRatio = texture.width / hit.collider.bounds.size.x;
    float heightRatio = texture.height / hit.collider.bounds.size.y;
    Vector2 uv = new Vector2(
        Mathf.Clamp01((localPos.x + hit.collider.bounds.extents.x) * widthRatio),
        Mathf.Clamp01((localPos.y + hit.collider.bounds.extents.y) * heightRatio));

    Debug.Log("localPos is = " + localPos);
    Debug.Log("localPos.x is = " + localPos.x);
    Debug.Log("localPos.y is = " + localPos.y);
    Debug.Log("hit.collider.bounds = " + hit.collider.bounds);
    Debug.Log("hit.collider.bounds.extents.x = " + hit.collider.bounds.extents.x);
    Debug.Log("hit.collider.bounds.extents.y = " + hit.collider.bounds.extents.y);
    Debug.Log("hit.collider.bounds.size.x = " + hit.collider.bounds.size.x);
    Debug.Log("hit.collider.bounds.size.y = " + hit.collider.bounds.size.y);

    // Convert UV coordinates to pixel coordinates
    int x = Mathf.RoundToInt(uv.x * texture.width);
    int y = Mathf.RoundToInt(uv.y * texture.height);
    Debug.Log("Pixel coordinates: (" + x + ", " + y + ")");

    if (x >= 0 && x < texture.width && y >= 0 && y < texture.height)
    {
        // 'color' and 'rawImage' are fields set elsewhere in the class
        texture.SetPixel(x, y, color);
        texture.Apply(); // upload the modified pixels to the GPU
        hit.collider.GetComponent<Renderer>().material.mainTexture = texture;
        rawImage.texture = texture;
    }
}
```

I added debug logs almost everywhere to try and figure out what’s going on, but I can’t wrap my head around it. This is what the console prints after I click one of the arms of the character:

```
localPos is = (0.0, 0.0, 0.0)
localPos.x is = -0,007495117
localPos.y is = -0,001375005
hit.collider.bounds = Center: (1002.5, 6.0, -20.0), Extents: (1.5, 0.6, 0.6)
hit.collider.bounds.extents.x = 1,549988
hit.collider.bounds.extents.y = 0,5500007
hit.collider.bounds.size.x = 3,099976
hit.collider.bounds.size.y = 1,100001
Pixel coordinates: (64, 64)
```

The texture has a resolution of 64x64.
In previous attempts, without scaling everything down, the Pixel Coordinates would be a number between 29 and 33.
After I scaled everything down to a size of 0.001 or similar (maybe more or fewer zeros), the texture coordinates would expand their… “scope”. But they still wouldn’t do the intended thing.

Thanks for reading this far. I highly appreciate your time. I hope your knowledge will enlighten my mind.

PS: Here’s the code for the raycast in case it’s important as well:

```csharp
public void castRayOnClick(LayerMask layerMask)
{
    Vector3 mousePos = Input.mousePosition;
    mousePos.z = 10f;
    mousePos = cam.ScreenToWorldPoint(mousePos);
    Debug.DrawRay(transform.position, mousePos - transform.position);

    if (Input.GetMouseButtonDown(0))
    {
        Ray ray = cam.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit;

        if (Physics.Raycast(ray, out hit, 100, layerMask, QueryTriggerInteraction.Collide))
        {
            Debug.Log("castRayOnClick ended");
            // Here you can implement what you want the click to do
        }
    }
}
```
Hey @Nikorin,

Sounds like a fun project, hope I can help a little.

I think you may have a wrong interpretation of the following line of code (or I might)…
Vector3 localPos = hit.collider.transform.InverseTransformPoint(hit.point);

You mention that this is the “local position of the clicked point relative to the center of the texture”.

My understanding is that this is the clicked point in local space relative to the hit collider’s transform, or in this case, the point relative to your 3D model’s origin (or center), not the texture. So unless you write a mapping system for each box collider (limb, torso, head, etc.) to the correct UV coordinates on the texture, this may not work out of the box so easily.
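To illustrate, here is a rough sketch of what such a per-limb mapping could look like. Everything here is hypothetical (the `LimbUvMapper` name, the six `Rect` fields), and it assumes each limb’s BoxCollider is centred on its transform: the hit’s normal, converted to local space, picks the face, and the local position on that face is mapped into the pixel rectangle that face occupies on the 64×64 skin.

```csharp
using UnityEngine;

// Hypothetical sketch: each limb carries a table mapping its six box faces
// to the pixel rectangle that face occupies on the skin texture.
public class LimbUvMapper : MonoBehaviour
{
    // One rect per face, in pixel coordinates on the skin texture.
    [SerializeField] private Rect frontRect, backRect, leftRect, rightRect, topRect, bottomRect;

    public Vector2Int HitToPixel(RaycastHit hit)
    {
        Transform t = hit.collider.transform;
        // Both the point and the normal must be in the collider's local space,
        // so the face test is independent of the model's rotation and scale.
        Vector3 localPos = t.InverseTransformPoint(hit.point);
        Vector3 localNormal = t.InverseTransformDirection(hit.normal);
        // Assumes the BoxCollider is centred on the transform (center == zero).
        Vector3 extents = ((BoxCollider)hit.collider).size * 0.5f;

        Rect face;
        float u, v; // 0..1 across the chosen face
        if (Mathf.Abs(localNormal.z) > 0.9f)       // front or back face
        {
            face = localNormal.z > 0f ? frontRect : backRect;
            u = (localPos.x + extents.x) / (extents.x * 2f);
            v = (localPos.y + extents.y) / (extents.y * 2f);
        }
        else if (Mathf.Abs(localNormal.x) > 0.9f)  // left or right face
        {
            face = localNormal.x > 0f ? rightRect : leftRect;
            u = (localPos.z + extents.z) / (extents.z * 2f);
            v = (localPos.y + extents.y) / (extents.y * 2f);
        }
        else                                       // top or bottom face
        {
            face = localNormal.y > 0f ? topRect : bottomRect;
            u = (localPos.x + extents.x) / (extents.x * 2f);
            v = (localPos.z + extents.z) / (extents.z * 2f);
        }

        // Map the 0..1 face coordinate into the face's pixel rectangle.
        int px = Mathf.FloorToInt(face.x + u * face.width);
        int py = Mathf.FloorToInt(face.y + v * face.height);
        return new Vector2Int(px, py);
    }
}
```

You would still have to fill in those six rects by hand for every limb, per the skin layout, which is exactly the bookkeeping the MeshCollider approach avoids.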

“I tried using RaycastHit.textureCoord but it always returns zero when not used alongside a Mesh Collider”

This makes sense because the MeshCollider is using your 3D model which has UV mappings. Since a generic box collider is not going to be used for any texturing (just used for physics), the UVs are either unset, or set to zero.
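For comparison, the MeshCollider route is only a few lines, because Unity fills in `RaycastHit.textureCoord` from the mesh’s UVs. A sketch (the method name is made up, and it assumes the texture has Read/Write enabled in its import settings, which `SetPixel` requires):

```csharp
// Sketch: with a MeshCollider on the model, RaycastHit.textureCoord already
// holds the interpolated UV at the hit point, so no manual mapping is needed.
private void drawOnHitMeshCollider(RaycastHit hit, Color color)
{
    Texture2D texture = hit.collider.GetComponent<Renderer>().material.mainTexture as Texture2D;
    if (texture == null) return;

    Vector2 uv = hit.textureCoord; // 0..1 across the whole texture
    // Clamp so uv == 1.0 doesn't index one pixel past the edge.
    int x = Mathf.Clamp(Mathf.FloorToInt(uv.x * texture.width), 0, texture.width - 1);
    int y = Mathf.Clamp(Mathf.FloorToInt(uv.y * texture.height), 0, texture.height - 1);

    texture.SetPixel(x, y, color);
    texture.Apply(); // upload the change to the GPU
}
```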

“but my optimization instincts tell me there’s a better way to do it”

Totally get you, but I would ask this instead: why is the Minecraft character model so high poly? 8,064 triangles sounds crazy for Steve. If you have a simpler model, then the MeshCollider raycasts won’t be as expensive. Realistically Steve has 6 body parts (I think), so 6 body parts × 6 sides × 2 triangles per side = 72 triangles total. Personally this is where I would attack from an optimization standpoint, because getting the UV coords straight out of the RaycastHit is very nice and is going to save you time and a headache.

I’ll close with one more thought. Does it work with the 8k-poly model? Does it melt your device or cause unbearable FPS? It’s usually a good idea to get something fully working, then come back around and optimize. You might run into other problems along the way that, for example, cause issues with the custom box-collider solution you’re working on.

Having a fully working solution that is poorly optimized is way better than having an unfinished product but with some cool optimization tricks.

Also, here is a tutorial by Catlike Coding about procedural meshes. It’s somewhat unrelated, but he goes into the anatomy of 3D meshes a bit (normals, UV mappings, triangles, etc.), which might be good base knowledge: Creating a Mesh

Best of luck, hope this is somewhat useful!