This is unexpected behavior for me… I have two cameras in identical spots: one renders to a texture, while the second renders the scene and then uses the rendered texture as a full-screen overlay to do some special effects.
I noticed that the elements seen by the render-to-texture camera move normally at run time along the vertical axis of the framebuffer, but as I change the viewport aspect ratio of the rendering window, they move along the horizontal axis at a rate roughly proportional to the horizontal-to-vertical aspect ratio. (i.e. with a square render target and a square viewport they match up pretty closely… if I then double the viewport width, the elements move about twice as fast across the horizontal axis as they should.)
When I changed the render to texture size, it actually changed the aspect ratio of the resultant image!
This implies to me that the render target texture is actually forcing an aspect ratio change on the camera that is writing to it. Please tell me this is overridable… I know from experience that you often want power-of-two textures, but that certainly shouldn’t affect the aspect ratio of the camera writing to the texture. (In actuality, I presume it’s just writing each pixel and casting the appropriate ray from the eye with a 1:1 correlation, but this still breaks the functionality for a lot of the cases where I had hoped to use rendered textures.)
The trick here is to create your render texture from code - then you can match your screen resolution exactly, rather than having to scale up to the next power-of-two…
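For anyone following along, here's a minimal sketch of that in UnityScript, attached to the camera that renders into the texture (the 16-bit depth buffer is just an assumption for this example):

function Start () {
    // Create a render texture that matches the screen resolution exactly,
    // so no power-of-two padding changes the aspect.
    var rt : RenderTexture = new RenderTexture(Screen.width, Screen.height, 16);
    rt.isPowerOfTwo = false;
    camera.targetTexture = rt;
}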
Is there ever a time where you want your rendertexture to have a different aspect ratio than the camera you used to shoot it?
In all the examples of this in the past that I’ve used, I’ve never run into a case of this… however, I do often write 1280 x 720 aspect ratio images to targets that are powers of two…
Curious on this one, because the details seem inverted to me.
Basically, there are 2 use cases for rendertextures:
1. Image Effects - those you typically do by implementing OnRenderImage (take a look in the Pro Standard Assets for source code). Here you do everything from code; a rough sketch follows below this list.
2. Textures in-game. Think of a video surveillance camera: a camera rendering into a texture, and that texture placed on geometry. Here, the render texture aspect can be completely different from the screen aspect.
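Roughly, an image effect script attached to the camera looks something like the sketch below (this is not the Pro Standard Assets source, just the general shape; effectMaterial is assumed to hold your image-effect shader):

var effectMaterial : Material; // material that uses your image-effect shader

function OnRenderImage (source : RenderTexture, destination : RenderTexture) {
    // Unity hands you the camera's rendered image in 'source';
    // run it through the effect material and write the result into 'destination'.
    Graphics.Blit (source, destination, effectMaterial);
}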
Ok, so I have my camera creating a rendertexture at start.
Weirdly, it doesn’t work until I go into the debug parameters for the object that references the render texture (and draws it to the screen during the overlay pass) and click on the “_mainTex” field.
Then it picks up the reference and starts drawing it.
I’m assuming that I need to connect the object to this texture at start, but the following doesn’t seem to take care of it:
(This script is attached to the fullscreen rect object).
function Start () {
    // Grab the render texture that the FXCamera renders into...
    var blobtexture : RenderTexture = GameObject.Find("FXCamera").camera.targetTexture;
    // ...and assign it as this object's main texture.
    renderer.material.SetTexture("_MainTex", blobtexture);
}
I also tried pushing it onto the ScreenRect from the camera’s initialization (this script is attached to the FXCamera object):
function Start () {
    // Create the render texture from code, at half the screen resolution
    // (so it keeps the screen's aspect ratio).
    var renderTexture : RenderTexture = new RenderTexture(Screen.width / 2, Screen.height / 2, 0);
    renderTexture.isPowerOfTwo = false;
    camera.targetTexture = renderTexture;
    camera.depth = -10; // force draw earlier than main camera
    // Hand the texture to the ScreenRect object so it can draw it during the overlay pass.
    GameObject.Find("ScreenRect").renderer.material.mainTexture = renderTexture;
}
I’m doing something wrong… still wrapping my head around how these classes talk to each other.
I found that I could set the resolution in my RenderTexture to match the intended aspect ratio of my camera by switching the editor to Debug mode, setting the variables, and then switching back to normal mode.
Hope this idea helps; please tell me if you know of any problems that might arise from this solution.
Thanks for the thread, guys. I wouldn’t have tried such a thing if I hadn’t seen that you could get the same results by creating the RenderTexture from code.
Since it took me a while to figure out what Aras meant, I figured I’d spell it out for anyone else who finds this. First I set the camera aspect ratio to 1, because my objects were being skewed for non-square RenderTextures. That had no effect. Then I tried the opposite, setting camera.aspect to the render texture’s own aspect ratio, and that worked: my objects no longer looked skewed in the output.
To shed some light on this: it appears that setting camera.targetTexture to a render texture automatically changes the aspect ratio of the camera under the hood.
If you need to maintain your aspect ratio for a camera you’re dumping to a texture, you can do this:
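Something along these lines (an untested sketch; the texture size here is just an example): assign the target texture first, then set the camera’s aspect back to whatever you actually want:

function Start () {
    var rt : RenderTexture = new RenderTexture(512, 512, 16);
    camera.targetTexture = rt;                            // this quietly changes camera.aspect to match the texture
    camera.aspect = 1.0 * Screen.width / Screen.height;   // so force it back to the aspect you want
}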
Thank you, thank you, thank you! This solved my issue. It’s one of those undocumented subtleties that ends up costing developers time. You saved me a lot of aggravation.
Whoa, thanks. I have been struggling with getting my camera to see objects in their proper proportions. Initial tests look promising. Going to play with this some more. Huge thanks!! (I wonder why you can’t do this in the Editor, as in, resize the aspect using the mouse by click-dragging?)
OMG, thank you so much. I had a predefined RenderTexture, and @NicholasFrancis’s reply made me think that maybe I should create it dynamically and, just in case, also overwrite the aspect ratio of the camera. I wonder whether my solution is dirty or not, but it works.