I need to draw different objects in a grid like this (the real meshes will be different, not just the cubes)

If I use an orthographic projection, the objects look the same but have no perspective distortion.

If I use a perspective projection, I get the distortion, but it’s a bit different from object to object because they have different positions relative to the camera.

If I draw meshes using Graphics.DrawMesh tweaking the camera’s projection matrix, then I get this

Ok sooooo… I’ve done it.
The trick I’ve employed is to “simply” use a different matrix in the material’s vertex program in place of UNITY_MATRIX_MVP.

The complicated part is getting the right matrix. I’ve posted in several threads before about creating off-axis camera projection matrices (search it), so I’ve put that knowledge to this problem. Then it’s just a case of getting a different skewed matrix for each cube, and sending that to the material.

The scene views are messed up because they use the same shader but a different camera, so the matrices are wrong for them.

It works, but I give no guarantees about lighting and shadows, and raycasting will also be wrong.

multiCam.cs:

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class multiCam : MonoBehaviour {

    public Transform[] Corners; // Three Transforms at the bottom-left, bottom-right and top-left of the camera frustum
    public GameObject theCube;  // A prefab cube with the material on it
    public Transform cubes;     // An empty transform to parent all the cubes to (housekeeping)
    public Transform cam;       // The camera transform (probably what this is attached to)

    List<Transform> allCubes;   // List of all cubes
    Camera theCamera;           // Cached camera component, used for the clip planes

    void Start () {
        cam = transform;
        theCamera = GetComponent<Camera>();
        allCubes = new List<Transform>();
        for ( int i = 0; i < 8; i++ ) {
            for ( int j = 0; j < 4; j++ ) {
                GameObject newCube = (GameObject) Instantiate( theCube );
                newCube.transform.position = new Vector3( i * 3 - 10.5f, j * 3 - 4.5f, 0 );
                newCube.transform.parent = cubes;
                allCubes.Add( newCube.transform );
            }
        }
    }

    void Update () {
        foreach ( Transform cube in allCubes ) {
            // A virtual camera position directly in front of this cube
            Vector3 myOffsetCameraPos = new Vector3( cube.position.x + cam.position.x, cube.position.y + cam.position.y, cam.position.z );
            Vector3 myCameraPos = new Vector3( cube.position.x, cube.position.y, cam.position.z );
            Matrix4x4 m = Matrix4x4.TRS( cube.position, cube.rotation, cube.lossyScale );                // model matrix
            Matrix4x4 v = Matrix4x4.TRS( myOffsetCameraPos, cam.rotation, new Vector3( 1, 1, -1 ) );     // view matrix (z flipped, OpenGL convention)
            Matrix4x4 p = GetMatrix( myCameraPos );                                                      // off-axis projection for this cube
            cube.GetComponent<Renderer>().material.SetMatrix( "_cam", p * v.inverse * m );
        }
    }

    public Matrix4x4 GetMatrix ( Vector3 atPosition ) {
        Vector3 pa = Corners[0].position;
        Vector3 pb = Corners[1].position;
        Vector3 pc = Corners[2].position;
        Vector3 pe = atPosition;                          // eye position
        Vector3 vr = ( pb - pa ).normalized;              // right axis of screen
        Vector3 vu = ( pc - pa ).normalized;              // up axis of screen
        Vector3 vn = Vector3.Cross( vr, vu ).normalized;  // normal vector of screen
        Vector3 va = pa - pe;                             // from pe to pa
        Vector3 vb = pb - pe;                             // from pe to pb
        Vector3 vc = pc - pe;                             // from pe to pc
        float n = theCamera.nearClipPlane;                // distance to the near clip plane (screen)
        float f = theCamera.farClipPlane;                 // distance to the far clip plane
        float d = Vector3.Dot( va, vn );                  // distance from eye to screen
        float l = Vector3.Dot( vr, va ) * n / d;          // distance to left screen edge from the 'center'
        float r = Vector3.Dot( vr, vb ) * n / d;          // distance to right screen edge from the 'center'
        float b = Vector3.Dot( vu, va ) * n / d;          // distance to bottom screen edge from the 'center'
        float t = Vector3.Dot( vu, vc ) * n / d;          // distance to top screen edge from the 'center'
        Matrix4x4 p = new Matrix4x4();                    // projection matrix
        p[0, 0] = 2.0f * n / ( r - l );
        p[0, 2] = ( r + l ) / ( r - l );
        p[1, 1] = 2.0f * n / ( t - b );
        p[1, 2] = ( t + b ) / ( t - b );
        p[2, 2] = ( f + n ) / ( n - f );
        p[2, 3] = 2.0f * f * n / ( n - f );
        p[3, 2] = -1.0f;
        return p;
    }
}
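If anyone wants to check the matrix math outside Unity, here's the same construction as GetMatrix in a little standalone Python sketch (pure math, no Unity; the corner and eye positions are just example values I made up). With the eye centred behind the screen it collapses to an ordinary symmetric perspective matrix, and moving the eye sideways skews the frustum:

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def off_axis_projection(pa, pb, pc, pe, n, f):
    """Off-axis projection from three screen corners (pa: bottom-left,
    pb: bottom-right, pc: top-left), eye position pe, and near/far planes,
    following the same steps as GetMatrix above."""
    vr = normalize(sub(pb, pa))     # right axis of screen
    vu = normalize(sub(pc, pa))     # up axis of screen
    vn = normalize(cross(vr, vu))   # normal vector of screen
    va, vb, vc = sub(pa, pe), sub(pb, pe), sub(pc, pe)
    d = dot(va, vn)                 # distance from eye to screen
    l = dot(vr, va) * n / d         # frustum extents on the near plane
    r = dot(vr, vb) * n / d
    b = dot(vu, va) * n / d
    t = dot(vu, vc) * n / d
    return [
        [2*n/(r-l), 0,         (r+l)/(r-l), 0],
        [0,         2*n/(t-b), (t+b)/(t-b), 0],
        [0,         0,         (f+n)/(n-f), 2*f*n/(n-f)],
        [0,         0,         -1,          0],
    ]

# Screen is the z=0 square from (-1,-1) to (1,1); eye 2 units behind it.
# Centred eye -> the off-centre terms [0][2] and [1][2] come out zero.
P = off_axis_projection((-1,-1,0), (1,-1,0), (-1,1,0), (0,0,-2), 0.1, 100.0)
```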

In a vert/frag shader:

//get the custom matrix
float4x4 _cam;
//in the vert program, instead of o.pos = mul(UNITY_MATRIX_MVP, v.vertex)
o.pos = mul(_cam, v.vertex);

I think I’ll use multiple cameras, since it’s the least intrusive way. The problem I had previously was the number of draw calls: 30 cameras added ~14,000 extra calls. But then I realized it was caused by gizmos, not the cameras.

Render to textures might help but I don’t have Pro. I’m making an inventory for items that are generated procedurally, and the only way to get thumbnails is to render the actual objects at runtime.

In that case you might be able to make some savings with ReadPixels, so that you only pay the cost of rendering the geometry, textures, lighting, etc. once, then just display a Texture2D thereafter (if the inventory can be static - no rotating, changing lighting, etc.).
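Something along these lines, roughly (a sketch only - the coroutine name, the rect, and the callback are all made up for illustration; the key constraint is that ReadPixels has to run after rendering, hence the WaitForEndOfFrame):

```csharp
// Hypothetical helper: grab a rectangle of the screen into a Texture2D once,
// then show that texture in the inventory instead of re-rendering the object.
IEnumerator CaptureThumbnail( Rect screenRect, System.Action<Texture2D> onDone ) {
    yield return new WaitForEndOfFrame(); // ReadPixels only works once the frame is drawn
    Texture2D tex = new Texture2D( (int)screenRect.width, (int)screenRect.height, TextureFormat.RGB24, false );
    tex.ReadPixels( screenRect, 0, 0 );   // copy screen pixels into the texture
    tex.Apply();                          // upload to the GPU
    onDone( tex );                        // hand the thumbnail back to the inventory
}
```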