Creating a plane from a plane equation

I have some code that is returning a plane equation in this form:

A(x - x0) + B(y - y0) + C(z - z0) = 0

where the normal vector is (A, B, C) and a point on the plane is (x0, y0, z0). Currently I have code that can create a plane of some width and height here:

public static GameObject createPlane(float width, float height)
    {
        GameObject go = new GameObject("Plane");
        MeshFilter mf = go.AddComponent(typeof(MeshFilter)) as MeshFilter;
        MeshRenderer mr = go.AddComponent(typeof(MeshRenderer)) as MeshRenderer;

        Mesh m = new Mesh();
        m.vertices = new Vector3[]
        {
            new Vector3(0, 0, 0),
            new Vector3(width, 0, 0),
            new Vector3(width, height, 0),
            new Vector3(0, height, 0)
        };

        m.uv = new Vector2[]
        {
            new Vector2(0,0),
            new Vector2(0,1),
            new Vector2(1,1),
            new Vector2(1,0)
        };

        mf.mesh = m;

        m.RecalculateBounds();
        m.RecalculateNormals();

        return go;
    }

The problem I’m having with this code is that I’m unsure how to change the plane’s normal. I attempted to use the mesh.normals array to change the normal, but whenever I made changes to it, I never saw those changes reflected in Unity. All I found out is that mesh.normals apparently has to be the same size as mesh.vertices. I would understand if it needed 2 vectors in this case, since the plane seems to be built from 2 triangles, but I don’t understand why each vertex would need its own normal vector.

In modern, realtime 3D engines (for example, game engines), you will essentially always see a normal on every vertex. None are stored for any given edge or face, however.

Why is this?

In the case of Unity’s shaders, the per-vertex normals are interpolated across each triangle to determine the normal direction at any given pixel. Because all meshes are ultimately composed of triangles (the simplest planar primitive in 3D space), this interpolation is always well defined.

This means, in part, that their usage and application to a mesh is entirely situational.

In the case of a sphere, you don’t want to see any edges. You want a smooth shape all around. In order to achieve this, each normal points directly outwards from the center of the sphere.

In the case of a cube, you need hard, crisp edges. If your vertex position and normal arrays are aligned (which keeps the data cheap to consume), this means you’ll have 3 vertices per corner, for a 24-vertex cube. An 8-vertex cube would mean dealing with a much more complex data structure and rendering pipeline, since you would now need separate instruction sets describing which vertices are associated with which normals.
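As a small illustration (the arrays below are my own example, not part of the original post), here are two of the three copies of a single cube corner, each carrying its own face’s normal:

```csharp
// Two cube faces meeting at the corner (1, 1, 1): the position is duplicated,
// once per face, so each copy can carry that face's own normal.
Vector3[] vertices =
{
    new Vector3(1, 1, 1),  // copy belonging to the +X face
    new Vector3(1, 1, 1),  // copy belonging to the +Y face
    // ... the same position appears a third time for the +Z face
};
Vector3[] normals =
{
    Vector3.right,  // +X face normal
    Vector3.up,     // +Y face normal
    // ...
};
```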

But back to the matter at hand, why not use face normals instead?

Well, look at it this way:

Where is the center of the face?

How do you define the difference between a rounded edge and a hard edge?

What if you want two corners of a triangle to face straight outward and the remaining one to curve and blend into neighboring faces?

Vertex normals are able to provide a clear indication of how the appearance of a mesh will be faked (remember, it’s still made of hard-edged triangles in reality). They specifically don’t leave room for ambiguity, instead permitting reliable and reproducible calculations of smoothing on a flat surface.

Edit: Well, what about sharing vertices, then? Why aren’t all vertices separated per triangle?

At this point, it’s left up to the rendering system and the best-case scenario for efficiency. The most efficient option is to have aligned arrays containing vertex and normal data. Therefore, when further normal information is not needed (on a rounded corner, such as the entirety of a sphere), there’s no need for duplicate data at that point. In the case of the cube, however, you have multiple normals sharing a position, so the sane solution, at the cost of slightly more memory, is to separate the vertices to match, keeping the arrays aligned.

This applies to triangulated meshes in Unity; I don’t know if it’s exactly the same for quads:
A triangle (in mesh.triangles) is defined by three indices, one for each point. A single index addresses the matching component in the vertex, normal and uv arrays. For example, if a triangle is defined by the 3 indices {7, 8, 9}, then point P1 has the components mesh.vertices[7], mesh.normals[7] and mesh.uv[7], P2 has mesh.vertices[8], mesh.normals[8], and so on. So a triangle is defined by 3 vertices, 3 uvs and 3 normals, which is why the three arrays (vertices, normals, uv) have to be of the same length. If you want a face to use its triangle’s normal, you can just assign that same normal to each of its points. For a quad, use a normal array of length 4 and set each element to your face normal.
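For example, here’s a minimal sketch of flat-shading a quad like the one in the question by assigning the same face normal to all four vertices (Vector3.back is an assumed face direction, and the winding is chosen to match it):

```csharp
Mesh m = new Mesh();
m.vertices = new Vector3[]
{
    new Vector3(0, 0, 0),
    new Vector3(1, 0, 0),
    new Vector3(1, 1, 0),
    new Vector3(0, 1, 0)
};
m.triangles = new int[] { 0, 2, 1, 0, 3, 2 };

// Same length as m.vertices; every entry is the face normal, so both
// triangles of the quad shade perfectly flat.
Vector3 faceNormal = Vector3.back;
m.normals = new Vector3[] { faceNormal, faceNormal, faceNormal, faceNormal };
```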

Well, for this problem you haven’t specified enough information. You have to distinguish between an infinite mathematical plane which your plane equation represents and a finite plane mesh.

A mathematical plane (since it’s infinite) doesn’t have a position, nor a specific orientation around its normal. It only has a certain distance from the origin. That’s why you usually define a plane in the “general form” instead of the “point-normal form”.

The point on the plane you already have can be used to specify the location of your finite plane in space. However, you’re still missing information about how you want that quad mesh to be rotated around the normal. If you don’t care about the rotation, you can simply use one of the world axes as a base.

In general, all you need to calculate two vectors orthogonal to your normal vector is the cross product.

So to get the first vector you would simply do:

Vector3 v1 = Vector3.Cross(normal, Vector3.up).normalized;
Vector3 v2 = Vector3.Cross(normal, v1).normalized;

v1 *= width * 0.5f;
v2 *= height * 0.5f;

// your vertex points:

Vector3 p1 = center + v1 + v2;
Vector3 p2 = center - v1 + v2;
Vector3 p3 = center - v1 - v2;
Vector3 p4 = center + v1 - v2;

This gives you 4 vertex points which define a plane that has a size of width * height. “center” is the reference point on the plane.

Note: If the normal vector is very close or equal to Vector3.up (or Vector3.down), this won’t work, as the cross product of two parallel vectors is the zero vector. So if the normal is too close to the vertical axis, you can simply use a different axis:

Vector3 v1;
if (Mathf.Abs(Vector3.Dot(normal.normalized, Vector3.up)) > 0.8f)
    v1 = Vector3.Cross(normal, Vector3.right).normalized;
else
    v1 = Vector3.Cross(normal, Vector3.up).normalized;
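Putting the pieces together, here’s a sketch of one possible complete helper (the method name CreatePlaneFromNormal and the 0.8f fallback threshold are my own choices, not part of the original answer):

```csharp
using UnityEngine;

public static class PlaneBuilder
{
    // Build a width-by-height quad centered at "center" (the point
    // (x0, y0, z0) from the plane equation) and facing along "normal"
    // (the vector (A, B, C) from the plane equation).
    public static GameObject CreatePlaneFromNormal(Vector3 center, Vector3 normal,
                                                   float width, float height)
    {
        normal = normal.normalized;

        // Pick a world axis that isn't (nearly) parallel to the normal,
        // otherwise the cross product degenerates to a zero vector.
        Vector3 axis = Mathf.Abs(Vector3.Dot(normal, Vector3.up)) > 0.8f
            ? Vector3.right
            : Vector3.up;

        Vector3 v1 = Vector3.Cross(normal, axis).normalized * (width * 0.5f);
        Vector3 v2 = Vector3.Cross(normal, v1).normalized * (height * 0.5f);

        GameObject go = new GameObject("Plane");
        MeshFilter mf = go.AddComponent<MeshFilter>();
        go.AddComponent<MeshRenderer>();

        Mesh m = new Mesh();
        m.vertices = new Vector3[]
        {
            center + v1 + v2,
            center - v1 + v2,
            center - v1 - v2,
            center + v1 - v2
        };
        m.uv = new Vector2[]
        {
            new Vector2(1, 1),
            new Vector2(0, 1),
            new Vector2(0, 0),
            new Vector2(1, 0)
        };
        // Don't forget the triangles, or nothing will render.
        // This winding makes both triangles face along "normal".
        m.triangles = new int[] { 0, 1, 2, 0, 2, 3 };
        m.RecalculateBounds();
        m.RecalculateNormals();

        mf.mesh = m;
        return go;
    }
}
```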

So @TSLzipper, you forgot to set your triangles ;)…
m.triangles = new int[] { 0, 1, 2, 2, 3, 0 };
What @Bunny83 mentioned is more scalable and robust, but if you never set your tris, you won’t see anything regardless ;). To answer the question about how to flip your normals, I would reference Bunny83’s comment and just set up Vector3 normal = Vector3.back;, or Vector3.forward if you wind your tris the other way, like 2, 1, 0, 0, 3, 2, etc. There are several ways to set them. Once they all point in one direction, flip between back and forward as needed. :stuck_out_tongue:
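Concretely, here’s the original createPlane with that fix applied (a sketch: the triangles line is the new piece, and I’ve also reordered the uv array so it lines up with the vertices; the winding here is one choice that makes the quad face Vector3.back):

```csharp
public static GameObject createPlane(float width, float height)
{
    GameObject go = new GameObject("Plane");
    MeshFilter mf = go.AddComponent(typeof(MeshFilter)) as MeshFilter;
    MeshRenderer mr = go.AddComponent(typeof(MeshRenderer)) as MeshRenderer;

    Mesh m = new Mesh();
    m.vertices = new Vector3[]
    {
        new Vector3(0, 0, 0),
        new Vector3(width, 0, 0),
        new Vector3(width, height, 0),
        new Vector3(0, height, 0)
    };
    m.uv = new Vector2[]
    {
        new Vector2(0, 0),
        new Vector2(1, 0),
        new Vector2(1, 1),
        new Vector2(0, 1)
    };
    // The missing piece: without triangle indices nothing is drawn.
    m.triangles = new int[] { 0, 2, 1, 0, 3, 2 };

    m.RecalculateBounds();
    m.RecalculateNormals();

    mf.mesh = m;
    return go;
}
```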