I’m sizing a quad to match an orthographic camera’s viewable area with the following trick:
mesh.vertices = new Vector3[]
{
    // One vertex per viewport corner, placed one unit in front of the camera.
    _finalCam.ViewportToWorldPoint(new Vector3(1, 1, 1)),
    _finalCam.ViewportToWorldPoint(new Vector3(1, 0, 1)),
    _finalCam.ViewportToWorldPoint(new Vector3(0, 1, 1)),
    _finalCam.ViewportToWorldPoint(new Vector3(0, 0, 1)),
};
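The quad’s UVs are just the plain 0-to-1 set, matching that vertex order (sketched from memory, not copied verbatim):

mesh.uv = new Vector2[]
{
    new Vector2(1, 1),
    new Vector2(1, 0),
    new Vector2(0, 1),
    new Vector2(0, 0),
};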
I’m drawing the “backbuffer” of composited rendertextures onto that quad.
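For context, the draw step is essentially just pointing the quad’s material at the composited texture (_backbuffer and _quadRenderer are placeholder names standing in for my actual fields):

// _backbuffer is the composited RenderTexture; _quadRenderer is the quad's MeshRenderer.
_quadRenderer.material.mainTexture = _backbuffer;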
When the game window is set to 1600x900 (our “native” resolution, and the resolution of the rendertargets), everything shows up great: the quad is exactly the size of the screen and the backbuffer image is completely visible.
However, if I shrink the game resolution (to, say, 800x450), everything does not just scale down by half like I expect. The quad’s UV coordinates run from 0 to 1 across the polygon, so I would expect to see the entire backbuffer image on a quad that still occupies exactly the entire screen. Instead I see about half of the texture on a quad that only covers the left half of the 800x450 game window.
Does anyone know what I’m not understanding about the relative sizes here? I’m expecting the rendertarget backbuffer texture to simply scale down to fit the quad’s UV settings. In fact, if I play the game at 1600x900 but set the quad to cover only the upper-right quadrant instead of the whole screen (replacing those 0s with 0.5fs), the texture does scale down just fine.
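That quadrant version is the same vertex code with the 0s swapped for 0.5f:

mesh.vertices = new Vector3[]
{
    _finalCam.ViewportToWorldPoint(new Vector3(1f, 1f, 1f)),
    _finalCam.ViewportToWorldPoint(new Vector3(1f, 0.5f, 1f)),
    _finalCam.ViewportToWorldPoint(new Vector3(0.5f, 1f, 1f)),
    _finalCam.ViewportToWorldPoint(new Vector3(0.5f, 0.5f, 1f)),
};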
So why does altering the actual game resolution mess this up?