I want to preface this by saying I’m very new to game development.
I’m trying to make a scene where the character is a sprite and the environment is 3d rendered at a low resolution to match the sprite resolution. Currently, the 3d world is rendered by one camera to a render texture, and the sprite is rendered by a separate camera to another layer. I’m trying to combine the two such that the sprite is rendered over the render texture. But try as I may, the sprite is always rendered behind. I have tried changing the depth values of each camera to no avail.
What am I missing?
Is there a better way of approaching this?
There are three ways that Unity draws / stacks / sorts / layers stuff:
In short,
The default 3D Renderers draw according to Z depth (distance from the camera).
SpriteRenderers draw according to their Sorting Layer and Order in Layer properties.
UI Canvas Renderers draw in hierarchy (transform) order, like a stack of papers.
If you find that you need to mix and match items using these different ways of rendering, and have them appear in ways they are not initially designed for, you need to:
identify what you are using
search online for the combination of things you are doing and how to achieve what you want.
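For the two-camera setup described above, one common gotcha is that camera depth only orders cameras that render to the *same* target. If the 3D camera writes to a RenderTexture while the sprite camera writes to the screen, changing depth values does nothing: the RenderTexture itself must first be displayed on screen (e.g. via a full-screen RawImage or quad), and then an overlay camera drawn after it. A minimal sketch of the overlay-camera approach, assuming a "Sprite" layer exists and the RenderTexture is already displayed full-screen behind it (the component and field names here are illustrative, not from the original post):

```csharp
using UnityEngine;

// Sketch: make spriteCamera draw on top of worldCamera's output.
// Assumes worldCamera's RenderTexture is already shown full-screen
// (e.g. on a RawImage) before spriteCamera renders.
public class OverlaySpriteCamera : MonoBehaviour
{
    public Camera worldCamera;   // renders the low-res 3D scene
    public Camera spriteCamera;  // renders only the sprite layer

    void Start()
    {
        // Render the sprite camera after the world camera.
        spriteCamera.depth = worldCamera.depth + 1;

        // Don't erase what is already on screen; only clear depth.
        spriteCamera.clearFlags = CameraClearFlags.Depth;

        // Restrict each camera to its own layer.
        spriteCamera.cullingMask = LayerMask.GetMask("Sprite");
        worldCamera.cullingMask = ~LayerMask.GetMask("Sprite");
    }
}
```

With this arrangement the sprite camera composites over whatever the world camera produced, so the sprite can no longer end up behind the render texture.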