- I’m trying to learn more about rendering process and I’m trying to create the matrices from scratch. And this is how I made them:
public static Matrix4 Translation(Vector3 position)
{
return new Matrix4(
new Vector4(1, 0, 0, 0),
new Vector4(0, 1, 0, 0),
new Vector4(0, 0, 1, 0),
new Vector4(position, 1),
MatrixCtor.Line);
}
public static Matrix4 Scalation(Vector3 scale)
{
return new Matrix4(
new Vector4(scale.x, 0, 0, 0),
new Vector4(0, scale.y, 0, 0),
new Vector4(0, 0, scale.z, 0),
new Vector4(0, 0, 0, 1),
MatrixCtor.Line);
}
The constructor for the custom class Matrix4 uses 4 vectors and an enum that specifies if the vectors are meant to be lines or columns.
So
Matrix.Translation(new Vector3(2, 2, 3))
will return:
| 1 0 0 0 |
| 0 1 0 0 |
| 0 0 1 0 |
| 2 2 3 1 |
The same for scaling.
I didn’t implement the rotation yet because I want a better understanding of quaternions first.
Now to the view matrix this is how I defined it:
public static Matrix4 Camera(Vector3 eye, Vector3 at, Vector3 up)
{
Matrix4 view = Matrix4.Identity;
Vector3 zaxis = (eye - at).normalized;
Vector3 xaxis = Vector3.Cross(up, zaxis).normalized;
Vector3 yaxis = Vector3.Cross(zaxis, xaxis);
return new Matrix4(new Vector4(xaxis, Vector3.Dot(xaxis, eye)),
new Vector4(yaxis, Vector3.Dot(yaxis, eye)),
new Vector4(zaxis, Vector3.Dot(zaxis, eye)),
new Vector4(0, 0, 0, 1), MatrixCtor.Column);
}
Now since I’m using row-vector multiplication I don’t know if MatrixCtor shoud be Column or Line, but I tried both of them.
For perspective projection I used:
public static Matrix4 Perspective(float aspect, float near, float far, float fov)
{
Matrix4x4 perspective = Matrix4.Null;
perspective.M00 = 1f / aspect * (float)Mathf.Tan(fov / 2f);
perspective.M11 = 1f / (float)Mathf.Tan(fov/2f);
perspective.M22 = -(float)(far + near) / (float)(far - near);
perspective.M32 = -1f;
perspective.M23 = (-2f * far * near) / (float)(far - near);
return perspective;
}
Now I did the perspective division:
Vector3[] vertices = new Vector3[36]
{
... VAO coordinates for a cube, I didn't implemented yet indexing
}
Matrix4 Pos = Matrix4.Translation(new Vector3(0, 0, 0));
Matrix4 Sc = Matrix4.Scalation(new Vector3(1, 1, 1));
Matrix4 M = Pos * Sc;
Matrix4 V = Matrix4x4.Camera(new Vector3(0, 0, -10), new Vector3(0, 0, 1), new Vector3(0, 1, 0));
Matrix4 P = Matrix4x4.Perspective(1, 0.03f, 1000f, 40f);
Matrix4 MVP = M * V * P; //row-vector order
for (int i = 0; i < 36; i++)
{
Vector4 position = new Vector4(vertices[i], 1) * MVP; //row-vector order
position.x /= position.w;
position.y /= position.w;
position.z /= position.w;
screen[i] = new Vector2(position.x , position.y);
screen[i].ToScreen();
}
Project(screen)
Now from NDC coordinates I tried to remap to screen coordinates:
public static Vector2 ToScreen()
{
return new Vector2((x + 1) * bitmap.Width / 2, Math.Abs((y + 1) * bitmap.Height / 2 - bitmap.Height));
}
After this I construct a bitmap by rasterizing the pixels and then display it in an unity Texture2D.
Something is wrong in this process, my question is how to test each step and see where is the problem and the second question why I see on every website that x, y, z clip space coords will be divided by w to obtain NDC if anyway I loose the z coordinate when I convert to screen coodinates.