Hi, I realized that the normals from meshes generated through Unity’s VHACD plugin are quite unusable. I wonder whether this is intended, and whether it’s just me. I have the latest version from the Package Manager.
Here is the difference between a default Cube, and a Convex Hull generated with VHACD of that Cube. Left side is VHACD, right side is default Cube:
Vertex Normals:
Face Normals:
By the looks of it, the vertices are not generated in a way that produces usable normals (unless one wants rounded corners): VHACD generates shared vertices instead of unique vertices per face.
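To make concrete why shared vertices produce those rounded-corner rays: RecalculateNormals averages the normals of every face meeting at a shared vertex. At a cube corner three faces meet, so the stored normal ends up pointing diagonally. A tiny standalone check (plain C#, no Unity, numbers are just the unit cube’s face normals):

using System;

class SharedNormalDemo
{
    static void Main()
    {
        // Three face normals meeting at a shared cube corner: +X, +Y, +Z.
        float sx = 1f, sy = 1f, sz = 1f; // component-wise sum of (1,0,0)+(0,1,0)+(0,0,1)
        float len = MathF.Sqrt(sx * sx + sy * sy + sz * sz);
        // The averaged, normalized vertex normal:
        Console.WriteLine($"({sx / len:F3}, {sy / len:F3}, {sz / len:F3})");
        // Each component is 1/sqrt(3) ~ 0.577, i.e. the normal is tilted
        // about 54.7 degrees away from each true face normal.
    }
}

That diagonal vector is exactly what the red/green/cyan rays in the screenshots show at the hull’s corners.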
The mesh was generated simply by calling vhacd.GenerateConvexMeshes() and assigning the resulting mesh to a MeshFilter, then calling RecalculateNormals, since the normals array comes out empty by default.
The code to generate the colored rays is as follows:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class CheckNormalsAndVertices : MonoBehaviour
{
    Mesh mesh;

    public bool displayVertexNormals = false;
    public bool displayFaceNormals = true;

    void OnEnable()
    {
        mesh = GetComponent<MeshFilter>().sharedMesh;
    }

    void Update()
    {
        if (mesh.normals.Length == 0)
            mesh.RecalculateNormals();

        var triangleIndices = mesh.triangles;
        var vertices = mesh.vertices;
        var normals = mesh.normals;

        // Note: triangleIndices.Length is the index count (3 per triangle).
        Debug.Log($"{transform.name} : INDEX COUNT {triangleIndices.Length} : VERTEX COUNT {vertices.Length} : NORMALS COUNT {normals.Length}");

        for (var i = 0; i < triangleIndices.Length; i += 3)
        {
            var p1 = vertices[triangleIndices[i]];
            var p2 = vertices[triangleIndices[i + 1]];
            var p3 = vertices[triangleIndices[i + 2]];
            var pCenter = (p1 + p2 + p3) / 3f;

            var n1 = normals[triangleIndices[i]];
            var n2 = normals[triangleIndices[i + 1]];
            var n3 = normals[triangleIndices[i + 2]];
            var normal = (n1 + n2 + n3) / 3f;

            if (displayVertexNormals)
            {
                var trP1 = transform.TransformPoint(p1);
                var trP2 = transform.TransformPoint(p2);
                var trP3 = transform.TransformPoint(p3);
                var trN1 = transform.TransformVector(n1);
                var trN2 = transform.TransformVector(n2);
                var trN3 = transform.TransformVector(n3);
                Debug.DrawRay(trP1, trN1.normalized, Color.red);
                Debug.DrawRay(trP2, trN2.normalized, Color.green);
                Debug.DrawRay(trP3, trN3.normalized, Color.cyan);
            }

            var transformedVertex = transform.TransformPoint(pCenter);
            var transformedNormal = transform.TransformVector(normal);
            if (displayFaceNormals)
                Debug.DrawRay(transformedVertex, transformedNormal.normalized, Color.yellow);
        }
    }
}
Is this all intended or something that slipped through? If intended, why? I imagine it is to reduce the vertex count, since the assumed use case is non-visual, but in my case I actually need correct face normals for non-visual work too. Maybe a setting to choose between shared and unique vertices would be nice?
Thank you!
[EDIT: Managed to get the right normals with the workaround below, but it would be much nicer if I could get the unique vertices with their correct normals in the mesh data itself, and I’m not wrapping my head around a function to do that automatically right now.]
// Compute the face normal from the triangle's winding instead of reading
// the (averaged) vertex normals; all three cross products point the same
// way for a planar triangle.
var n1 = Vector3.Cross(p2 - p1, p3 - p1);
var n2 = Vector3.Cross(p3 - p2, p1 - p2);
var n3 = Vector3.Cross(p1 - p3, p2 - p3);
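[EDIT 2: For anyone wanting the unique vertices baked into the mesh itself rather than recomputed every frame, here is a minimal, untested sketch of such a function. It duplicates every vertex per triangle so RecalculateNormals produces hard, per-face normals. MakeFlatShaded is my own name, not part of the VHACD plugin:

Mesh MakeFlatShaded(Mesh source)
{
    var oldVertices = source.vertices;
    var oldTriangles = source.triangles;

    // One new vertex per index: each triangle gets its own three corners,
    // shared with no other face.
    var newVertices = new Vector3[oldTriangles.Length];
    var newTriangles = new int[oldTriangles.Length];
    for (var i = 0; i < oldTriangles.Length; i++)
    {
        newVertices[i] = oldVertices[oldTriangles[i]];
        newTriangles[i] = i;
    }

    var result = new Mesh();
    // Duplication can push the vertex count past the 16-bit index limit.
    result.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32;
    result.vertices = newVertices;
    result.triangles = newTriangles;
    result.RecalculateNormals(); // now yields flat face normals
    result.RecalculateBounds();
    return result;
}

The trade-off is roughly a 3x vertex count, which is presumably why the plugin shares vertices in the first place.]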