# Drawing many objects with equal perspective distortions

I need to draw different objects in a grid like this (the real meshes will be different, not just the cubes)

1. If I use an orthographic projection, the objects look the same but have no perspective distortion.

1. If I use a perspective projection, I get the distortion, but it’s a bit different from object to object because they have different positions relative to the camera.

1. If I draw meshes using Graphics.DrawMesh while tweaking the camera’s projection matrix, I get this:

``````csharp
private void Update()
{
    float width = 2f / columns;
    float height = 2f / rows;

    camera.ResetProjectionMatrix();
    Matrix4x4 initialProjectionMatrix = camera.projectionMatrix;

    Matrix4x4 objectToWorldMatrix = Matrix4x4.TRS(new Vector3(0, 0, 5), Quaternion.Euler(15, 15, 15), Vector3.one);
    int objectLayer = LayerMask.NameToLayer("Default");

    for (int column = 0; column < columns; column++)
    {
        for (int row = 0; row < rows; row++)
        {
            // Center of this grid cell in NDC (both axes run -1..1).
            float x = -1 + column * width + width / 2;
            float y = 1 - row * height - height / 2;

            // Shift the projection so the object lands in this cell.
            camera.projectionMatrix = Matrix4x4.TRS(new Vector3(x, y, 0), Quaternion.identity, Vector3.one) * initialProjectionMatrix;
            Graphics.DrawMesh(mesh, objectToWorldMatrix, material, objectLayer, camera, 0, null, false, false);
        }
    }
}
``````

All the cubes are drawn in the same place on screen. I guess DrawMesh only queues the mesh to be rendered later in the frame, so only the last projection matrix I assign is actually used.
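A quick sanity check of the matrix math itself, in plain Python rather than Unity since no engine APIs are needed (the fov/near/far values and helper names here are made up): pre-multiplying the projection by a clip-space translation of (x, y, 0) really does shift the image by exactly (x, y) in NDC after the perspective divide, so the cell-offset idea is sound; the failure is in when the camera's matrix gets read, not in the math.

```python
import math

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, v):
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def perspective(fov_y_deg, aspect, n, f):
    # Standard OpenGL-style perspective matrix (camera looks down -z).
    t = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [[t / aspect, 0, 0, 0],
            [0, t, 0, 0],
            [0, 0, (f + n) / (n - f), 2 * f * n / (n - f)],
            [0, 0, -1, 0]]

def translation(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

P = perspective(60.0, 1.0, 0.3, 1000.0)
point = [0.4, -0.2, -5.0, 1.0]         # a view-space point in front of the camera

clip = transform(P, point)
ndc = [c / clip[3] for c in clip[:3]]  # perspective divide

# Pre-multiply the projection by a clip-space translation of (0.5, -0.25).
shifted = transform(matmul(translation(0.5, -0.25, 0.0), P), point)
shifted_ndc = [c / shifted[3] for c in shifted[:3]]

# The image moves by exactly (0.5, -0.25) in NDC.
print(round(shifted_ndc[0] - ndc[0], 6), round(shifted_ndc[1] - ndc[1], 6))
```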

1. I could try Graphics.DrawMeshNow, but the docs say I won’t get per-pixel lighting then.

2. The only solution I can think of right now is to use multiple cameras. If I have a 4×8 grid, I will need 32 cameras.

Question: Is there a way to draw all the objects with a single camera?

1. I thought I could set the camera’s projection matrix to identity and bake the actual projection matrix into the objectToWorld matrix:
``````csharp
private void Update()
{
    float width = 2f / columns;
    float height = 2f / rows;

    camera.ResetProjectionMatrix();
    Matrix4x4 initialProjectionMatrix = camera.projectionMatrix;
    camera.projectionMatrix = Matrix4x4.identity;

    Matrix4x4 objectToWorldMatrix = Matrix4x4.TRS(new Vector3(0, 0, 5), Quaternion.Euler(15, 15, 15), Vector3.one);
    int objectLayer = LayerMask.NameToLayer("Default");

    for (int column = 0; column < columns; column++)
    {
        for (int row = 0; row < rows; row++)
        {
            float x = -1 + column * width + width / 2;
            float y = 1 - row * height - height / 2;

            // Cell offset * projection * objectToWorld, all folded into the "world" matrix.
            var allInOneMatrix = Matrix4x4.TRS(new Vector3(x, y, 0), Quaternion.identity, Vector3.one)
                * initialProjectionMatrix
                * objectToWorldMatrix;
            Graphics.DrawMesh(mesh, allInOneMatrix, material, objectLayer, camera, 0, null, false, false);
        }
    }
}
``````

But this way I got no rendering at all. I guess the cubes were just thrown away by frustum culling.
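My guess at the mechanics, reduced to arithmetic (plain Python; the projection parameters and the cell offset of (0.5, 0.5) are made-up values, and the culling behavior is an assumption, not documented fact): the combined matrix produces homogeneous coordinates with w ≠ 1, so if the engine treats the passed matrix as an ordinary affine object-to-world transform when transforming the mesh bounds for culling, those bounds land far outside the identity camera's clip box.

```python
import math

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, v):
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def perspective(fov_y_deg, aspect, n, f):
    t = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [[t / aspect, 0, 0, 0],
            [0, t, 0, 0],
            [0, 0, (f + n) / (n - f), 2 * f * n / (n - f)],
            [0, 0, -1, 0]]

def translation(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

P = perspective(60.0, 1.0, 0.3, 1000.0)
M = translation(0, 0, 5)                  # objectToWorldMatrix (rotation omitted)
all_in_one = matmul(matmul(translation(0.5, 0.5, 0), P), M)

h = transform(all_in_one, [0, 0, 0, 1])   # the cube's local origin
print("homogeneous result:", [round(c, 4) for c in h])
# w comes out as -5, not 1. Reading xyz off as an affine world position puts
# the cube at roughly (-2.5, -2.5, -5.6), well outside the identity
# projection's [-1, 1] clip box, so the bounds would be frustum-culled.
```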

1. I tried to tweak the projection inside the shader, but failed again: http://forum.unity3d.com/threads/208584-How-to-apply-an-offset-in-screen-viewport-space-inside-surface-shader?p=1405687#post1405687

Ok sooooo… I’ve done it.
The trick I’ve employed is to “simply” use a different matrix in the material’s vertex program in place of UNITY_MATRIX_MVP.

The complicated part is getting the right matrix. I’ve posted in several threads before about creating off-axis camera projection matrices (search for it), so I’ve applied that knowledge to this problem. Then it’s just a case of computing a different skewed matrix for each cube and sending it to the material.

The Scene views are messed up because they use the same shader but a different camera, so the matrices are wrong for them.

It works, but I give no guarantees about lighting and shadows, and raycasting will also be wrong.

multiCam.cs:

``````csharp
using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class multiCam : MonoBehaviour {
    public Transform[] Corners; // 3 Transforms at the bottom left, bottom right and top left of the camera frustum
    public GameObject theCube;  // A prefab cube with the material on it
    public Transform cubes;     // An empty transform to parent all the cubes to (housekeeping)
    public Transform cam;       // The camera transform (probably what this is attached to)
    List<Transform> allCubes;   // List of all cubes

    void Start () {
        cam = transform;
        allCubes = new List<Transform>();
        for ( int i = 0; i < 8; i++ ) {
            for ( int j = 0; j < 4; j++ ) {
                GameObject newCube = (GameObject) Instantiate( theCube );
                newCube.transform.position = new Vector3( i * 3 - 10.5f, j * 3 - 4.5f, 0 );
                newCube.transform.parent = cubes;
                allCubes.Add( newCube.transform ); // without this, Update has nothing to iterate over
            }
        }
    }

    void Update () {
        foreach ( Transform cube in allCubes ) {
            // A virtual eye directly in front of this cube, carrying the real camera's offset
            Vector3 myOffsetCameraPos = new Vector3( cube.position.x + cam.position.x, cube.position.y + cam.position.y, cam.position.z );
            Vector3 myCameraPos = new Vector3( cube.position.x, cube.position.y, cam.position.z );
            Matrix4x4 m = Matrix4x4.TRS( cube.position, cube.rotation, cube.lossyScale );
            Matrix4x4 v = Matrix4x4.TRS( myOffsetCameraPos, cam.rotation, new Vector3( 1, 1, -1 ) );
            Matrix4x4 p = GetMatrix( myCameraPos );
            // Full per-cube MVP, sent to the cube's material
            cube.renderer.material.SetMatrix( "_cam", p * v.inverse * m );
        }
    }

    public Matrix4x4 GetMatrix ( Vector3 atPosition ) {
        Vector3 pa, pb, pc;
        pa = Corners[0].position;
        pb = Corners[1].position;
        pc = Corners[2].position;

        Vector3 pe = atPosition; // eye position

        Vector3 vr = ( pb - pa ).normalized; // right axis of screen
        Vector3 vu = ( pc - pa ).normalized; // up axis of screen
        Vector3 vn = Vector3.Cross( vr, vu ).normalized; // normal vector of screen

        Vector3 va = pa - pe; // from pe to pa
        Vector3 vb = pb - pe; // from pe to pb
        Vector3 vc = pc - pe; // from pe to pc

        float n = camera.nearClipPlane; // distance to the near clip plane (screen)
        float f = camera.farClipPlane;  // distance of far clipping plane
        float d = Vector3.Dot( va, vn ); // distance from eye to screen
        float l = Vector3.Dot( vr, va ) * n / d; // distance to left screen edge from the 'center'
        float r = Vector3.Dot( vr, vb ) * n / d; // distance to right screen edge from 'center'
        float b = Vector3.Dot( vu, va ) * n / d; // distance to bottom screen edge from 'center'
        float t = Vector3.Dot( vu, vc ) * n / d; // distance to top screen edge from 'center'

        Matrix4x4 p = new Matrix4x4(); // Projection matrix
        p[0, 0] = 2.0f * n / ( r - l );
        p[0, 2] = ( r + l ) / ( r - l );
        p[1, 1] = 2.0f * n / ( t - b );
        p[1, 2] = ( t + b ) / ( t - b );
        p[2, 2] = ( f + n ) / ( n - f );
        p[2, 3] = 2.0f * f * n / ( n - f );
        p[3, 2] = -1.0f;

        return p;
    }
}
``````
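A numeric check of the off-axis frustum math in GetMatrix, rewritten in plain Python so it can run outside Unity (the screen corners, eye position, and near/far values are made-up test inputs): wherever the eye sits, the screen corners must map to the edges of NDC, i.e. x, y = ±1.

```python
import math

def get_matrix(pa, pb, pc, pe, n, f):
    # Mirrors multiCam.GetMatrix: pa/pb/pc = bottom-left, bottom-right,
    # top-left screen corners; pe = eye position.
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    def norm(a):
        length = math.sqrt(dot(a, a))
        return [c / length for c in a]
    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

    vr, vu = norm(sub(pb, pa)), norm(sub(pc, pa))
    vn = norm(cross(vr, vu))
    va, vb, vc = sub(pa, pe), sub(pb, pe), sub(pc, pe)
    d = dot(va, vn)
    l, r = dot(vr, va) * n / d, dot(vr, vb) * n / d
    b, t = dot(vu, va) * n / d, dot(vu, vc) * n / d
    return [[2*n/(r-l), 0, (r+l)/(r-l), 0],
            [0, 2*n/(t-b), (t+b)/(t-b), 0],
            [0, 0, (f+n)/(n-f), 2*f*n/(n-f)],
            [0, 0, -1, 0]]

def project(p, v_eye):
    v = v_eye + [1.0]
    clip = [sum(p[i][k] * v[k] for k in range(4)) for i in range(4)]
    return [clip[0] / clip[3], clip[1] / clip[3]]

# Screen is the unit square in the z = 0 plane; eye is off to the side (off-axis).
pa, pb, pc = [-1, -1, 0], [1, -1, 0], [-1, 1, 0]
pe = [0.3, 0.2, -2.0]
P = get_matrix(pa, pb, pc, pe, n=0.3, f=1000.0)

# Eye-space coords of a world point: offset from the eye, z negated because the
# camera looks down -z (the (1, 1, -1) scale in the C# view matrix).
def eye_space(q): return [q[0] - pe[0], q[1] - pe[1], -(q[2] - pe[2])]

for corner, expected in [(pa, [-1, -1]), (pb, [1, -1]), (pc, [-1, 1])]:
    print([round(c, 6) for c in project(P, eye_space(corner))], expected)
```

Each corner comes out at its expected NDC edge, which is exactly the property that lets a different skewed matrix per cube keep every cube's perspective identical.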

In a vert/frag shader:

``````hlsl
// get the custom matrix
float4x4 _cam;

// in the vert program, instead of o.pos = mul(UNITY_MATRIX_MVP, v.vertex):
o.pos = mul(_cam, v.vertex);
``````

But what’s wrong with the multi-cam approach:

Phew… afternoon was fun.

I think I will use multiple cameras, since it’s the least intrusive way. The problem I had previously was the number of draw calls: 30 cameras added ~14 000 extra calls. But then I realized it was caused by gizmos, not the cameras.

After reading this, I wondered: Why? Why did you need this? And why wasn’t “render to texture” the answer?

Render to texture might help, but I don’t have Pro. I’m making an inventory for items that are generated procedurally, and the only way to get thumbnails is to render the actual objects at runtime.

In that case you might be able to make some savings with ReadPixels, so that you only pay the cost of rendering the geometry, textures, lighting, etc. once, then just display a Texture2D thereafter (if the inventory can be static: no rotating, no changing lighting, etc.).