camera viewport rect breaks when using camera.SetTargetBuffers ? (works with camera.targetTexture)

I’m improving some portal rendering for my project and need to use the MRT approach for my portal camera.
My portal camera has its viewport rect altered so that it only renders the region where the portal is in view.

When I use camera.SetTargetBuffers, the viewport rect I've set for the portal camera is stretched to fill the whole render texture when calling camera.Render().
The effect is that the region defined by the viewport rect gets blown up and stretched non-uniformly, which is incorrect.

When using camera.targetTexture it renders correctly: the rendered region occupies the viewport rect at the right location and size, while the rest of the render texture outside the viewport rect stays empty.

I’m on Unity 4.7 and would like to know if this is a bug that is fixed in 5.0 or if there is a way to get the camera viewport rect working correctly with TargetBuffers.
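For reference, here's roughly the setup I'm comparing, heavily simplified (single color/depth buffer instead of my actual MRT setup, and the names are just placeholders):

```csharp
using UnityEngine;

public class PortalCameraExample : MonoBehaviour
{
    public Camera portalCamera;      // hypothetical portal camera
    public RenderTexture portalRT;   // screen-sized render texture (with a depth buffer)

    void LateUpdate()
    {
        // Restrict rendering to the screen region the portal occupies (normalized 0-1 coords).
        portalCamera.rect = new Rect(0.25f, 0.25f, 0.5f, 0.5f);

        // Variant A: works as expected - the portal region lands in the matching
        // sub-rectangle of the render texture.
        portalCamera.targetTexture = portalRT;
        portalCamera.Render();
        portalCamera.targetTexture = null;

        // Variant B: with SetTargetBuffers the viewport rect appears to be ignored
        // and the rendered region is stretched over the whole texture.
        portalCamera.SetTargetBuffers(portalRT.colorBuffer, portalRT.depthBuffer);
        portalCamera.Render();
    }
}
```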

I think one workaround might be to re-scale the rendered viewport region back to the right location in the portal's shader. Currently the shader just relies on screen UVs and on the camera's rect rendering to the right spot in the first place.

Edit: I did manage to get it working by rescaling UVs when applying the portal render texture. Still curious why there is a difference between targetTexture & SetTargetBuffers.
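Roughly what the fix looks like, as a sketch. I'm assuming the portal shader exposes a scale/offset vector (the _PortalRectST name here is made up); the shader then does `uv = screenUV * _PortalRectST.xy + _PortalRectST.zw` before sampling:

```csharp
using UnityEngine;

public class PortalUVRescale : MonoBehaviour
{
    public Camera portalCamera;
    public Material portalMaterial;

    void LateUpdate()
    {
        Rect r = portalCamera.rect;

        // With SetTargetBuffers the viewport rect's content fills the whole texture,
        // so remap screen UVs into the rect's local 0-1 range before sampling:
        //   portalUV = (screenUV - r.position) / r.size
        // which expands to scale = 1/r.size, offset = -r.position/r.size.
        portalMaterial.SetVector("_PortalRectST",
            new Vector4(1f / r.width, 1f / r.height, -r.x / r.width, -r.y / r.height));
    }
}
```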

I encountered the same problem, and issued a bug report.
BTW, I’m using 5.4f3

Hi crazii, I don't suppose you've heard anything back regarding that bug report, have you? I came across the same issue, and it's a bit of a roadblock for what I'm currently working on.

Nope. I've read your thread about rendering to part of the texture and I'm shocked. I guess it's not a BUG when the doc says "not supported".

You could create one extra small render texture, render the scene into the small one first each time, then blit it to the large one with the correct offset. Anyway, I'm using a texture array now. You could also use a texture array if it fits your needs and your target platform supports it. But I still want the viewport stuff to work.
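Something like this, roughly (untested sketch; Graphics.CopyTexture needs Unity 5.4+ and SystemInfo.copyTextureSupport on the target device, and the names are placeholders):

```csharp
using UnityEngine;

public class PortalRegionCopy : MonoBehaviour
{
    public Camera portalCamera;
    public RenderTexture smallRT;   // sized to just the portal region
    public RenderTexture bigRT;     // full-size texture the portal shader samples

    // dstX/dstY: pixel offset of the portal region inside bigRT
    public void RenderPortalRegion(int dstX, int dstY)
    {
        // Render normally into the small texture, using the full viewport.
        portalCamera.rect = new Rect(0f, 0f, 1f, 1f);
        portalCamera.targetTexture = smallRT;
        portalCamera.Render();
        portalCamera.targetTexture = null;

        // Copy the whole small RT into bigRT at the given pixel offset.
        Graphics.CopyTexture(smallRT, 0, 0, 0, 0, smallRT.width, smallRT.height,
                             bigRT, 0, 0, dstX, dstY);
    }
}
```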

I wonder why Unity has limitations on such a fundamental function. Probably it would mess with the existing mechanism or shader variables.

That's a good workaround for now, but since I'd like to scale the resolution depending on current performance (so it can vary frame by frame), it probably won't be ideal performance-wise. I'd rather just render into a viewport of a RenderTexture that's already at the maximum size I'll need, instead of creating a new RenderTexture every time my desired resolution changes. So the workaround might be to have a few RenderTextures of fixed size with the same aspect ratio as the screen, and choose between them as needed.
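Something like this is what I have in mind (just a sketch; the scale steps and names are made up):

```csharp
using UnityEngine;

public class PortalRTPool : MonoBehaviour
{
    RenderTexture[] pool;
    readonly float[] scales = { 0.5f, 0.75f, 1.0f };   // fixed sizes, screen aspect ratio

    void Start()
    {
        pool = new RenderTexture[scales.Length];
        for (int i = 0; i < scales.Length; i++)
        {
            int w = Mathf.RoundToInt(Screen.width * scales[i]);
            int h = Mathf.RoundToInt(Screen.height * scales[i]);
            pool[i] = new RenderTexture(w, h, 24);
        }
    }

    // Pick the largest pre-allocated RT that fits the current performance budget.
    public RenderTexture Select(float desiredScale)
    {
        for (int i = scales.Length - 1; i >= 0; i--)
            if (scales[i] <= desiredScale)
                return pool[i];
        return pool[0];
    }

    void OnDestroy()
    {
        foreach (var rt in pool)
            if (rt != null) rt.Release();
    }
}
```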

According to this, it’s fixed for a future release. Hopefully we see this in the next patch!

Thanks, that’s really good news! Cool idea to adapt resolution to runtime performance, and your workaround sounds good.

Thanks! It's mostly inspired by this 2011 article by Intel, plus some more developed anti-ghosting measures (drawing on techniques described by the teams behind the gorgeous Uncharted 4, BLOPS 3, and Doom 4 at SIGGRAPH 2016). It leans quite a bit on temporal anti-aliasing, which is not only great for upscaling but works with super sampling too, so I can dynamically scale higher than 100% screen resolution where performance allows.

I’m not going overboard with it, but it already helps some of the slower integrated graphics chips save a few ms and stay closer to 60fps. And it’s easy to get useful results quickly, without going too crazy on the filtering techniques, so I recommend giving it a go.

The fix has gone public, just so you know, crazii :slight_smile: The latest patch has it.

Thanks for the tips! I'm gonna bookmark this page for later use. :slight_smile:

Great! Thanks!