To dramatically reduce the number of draw calls in my project, I want to use a camera that only renders the scene when needed.
I have two cameras in my scene, a ‘CachingCamera’ and a ‘CompositeCamera’.
The CachingCamera only occasionally renders some background objects to a RenderTexture.
The CompositeCamera first renders the RenderTexture of the CachingCamera, and then some dynamic objects.
Layers are used to assign objects to the CachingCamera and the CompositeCamera.
Now, when I want to draw the texture from the CachingCamera in CompositeCamera.OnPreRender(), I get the following error:
Error assigning 2D rectangle texture to 2D texture property '_MainTex': Dimensions must match
This is the code for the CachingCamera:
public class CachingCamera : MonoBehaviour {

    private RenderTexture cachedRenderTexture;
    private bool rendering = true;

    public void Awake()
    {
        // Create the render texture here rather than in a field initializer:
        // Unity API calls such as Screen.width should not run in constructors.
        cachedRenderTexture = new RenderTexture(Screen.width, Screen.height, 24);
        cachedRenderTexture.Create();
    }

    public void Start()
    {
        // Redirect this camera's output into the cached texture.
        Camera camera = (Camera)gameObject.GetComponent(typeof(Camera));
        camera.targetTexture = cachedRenderTexture;
    }

    public void SetRendering(bool rendering)
    {
        this.rendering = rendering;
        Camera camera = (Camera)gameObject.GetComponent(typeof(Camera));
        camera.enabled = rendering;
    }

    public bool IsRendering()
    {
        return rendering;
    }

    public Texture GetCachedTexture()
    {
        return cachedRenderTexture;
    }
}
And this is the code for the CompositeCamera:
public class CompositeCamera : MonoBehaviour {

    private CachingCamera backgroundCamera;

    public void Start()
    {
        // Fetch the CachingCamera component directly instead of casting
        // the first MonoBehaviour found on the object.
        backgroundCamera = (CachingCamera)GameObject.Find("BackgroundCamera").GetComponent(typeof(CachingCamera));
    }

    public void OnPreRender()
    {
        // Draw the cached background before the dynamic objects are rendered.
        Texture backgroundTexture = backgroundCamera.GetCachedTexture();
        Graphics.DrawTexture(new Rect(0.0f, 0.0f, backgroundTexture.width, backgroundTexture.height),
                             backgroundTexture);
    }
}
After searching the forums for clues, I tried using power-of-two dimensions when creating and when rendering the RenderTexture, but that resulted in the same error.
Does anyone have a clue what I could be doing wrong here?
I’m not entirely sure if this is possible on the iPhone, and I don’t know a solution to your problem, but I do think you will give the GC a lot of work by creating textures on the fly constantly…
In this particular case, the texture is only rendered once at the beginning of a level…
This would be a huge optimization in terms of draw calls; all background 3D objects can be rendered in one draw call if they are pre-rendered into a texture.
Oh. In that case, maybe it’s possible to achieve this using a shader? Does anyone have a clue?
If anyone knows another way to optimize the 3D background draw calls (apart from batching), that would be very welcome, of course!
Note that the background can vary depending on how the level is tuned, so manually pre-rendering it into texture files is not an option.
Render to texture is not supported in the current Unity iPhone. If you want to do this before the level starts, why not do it outside your game? Your loading will be faster. Here is the tip:
Render your background scene, or just take a picture of your objects in the editor, and fake it onto a plane, a skybox, or whatever you need.
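A minimal sketch of the “fake it onto a plane” idea, assuming a pre-baked background image stored under a Resources folder (the asset name “backgroundBake” is hypothetical):

```csharp
using UnityEngine;

// Assigns a pre-baked background image to a plane's material at level start.
// "backgroundBake" is a hypothetical texture asset placed under Resources/.
public class BakedBackground : MonoBehaviour {

    public void Start()
    {
        Texture2D baked = (Texture2D)Resources.Load("backgroundBake", typeof(Texture2D));
        Renderer planeRenderer = (Renderer)gameObject.GetComponent(typeof(Renderer));
        planeRenderer.material.mainTexture = baked;
    }
}
```

Attach this to the plane (or skybox quad) that stands in for the background geometry.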
Cheers,
Pre-baking will eat lots of memory, and it’s obviously substantially less dynamic. Loading the textures dynamically will also basically cripple performance.
Until there’s render-to-texture ability, your idea makes a great whitepaper
Haven’t really given this any thought, but what about coding this up manually by writing pixels to a texture yourself? It would be pretty slow, but the iPhone can deal with texture creation on the fly via Texture2D.SetPixel.
Just put the camera in the right spot, sample pixels into a texture, and then use it. Could it be that easy?
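As a rough sketch of the manual approach, here is a texture filled pixel by pixel with Texture2D.SetPixel. The color written is a placeholder; actually sampling the screen is a separate problem, as discussed below:

```csharp
using UnityEngine;

// Fills a texture pixel by pixel via SetPixel, then uploads it to the GPU.
// The solid color is a placeholder; sampling real screen pixels is a
// separate problem on Unity iPhone.
public class ManualTextureFill : MonoBehaviour {

    public void Start()
    {
        Texture2D tex = new Texture2D(256, 256, TextureFormat.RGB24, false);
        for (int y = 0; y < tex.height; y++)
        {
            for (int x = 0; x < tex.width; x++)
            {
                tex.SetPixel(x, y, Color.gray);
            }
        }
        tex.Apply(); // Upload the written pixel data to the GPU.
        Renderer r = (Renderer)gameObject.GetComponent(typeof(Renderer));
        r.material.mainTexture = tex;
    }
}
```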
“Deadly slow” describes it best. From what I recall, if you go for 256x256 or larger, you can easily kill your performance to below 20 fps by writing a few times per second.
You can do that if you do the whole rendering through the CPU and only render to the screen for “flipping”.
Ah, I thought that doing the texture write in a single shot on level load was enough. It sounds like it’s similar to the “impostor” method usually used for trees.
As a one-time operation it should be less of a problem performance-wise; you will just have to cope with the VRAM usage, since writable textures are automatically uncompressed.
In the case where I want to do this, it only has to be done once every time a level is loaded. So even if it would take a second or two, that would be no problem.
Texture2D.SetPixel is bringing me a step closer to a solution, but now the problem is: how do I read a pixel from the screen? I’m afraid this may not be possible with Unity iPhone…
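If Texture2D.ReadPixels is available on the target Unity version, the framebuffer can be sampled after a camera has rendered. Treat this as a sketch; whether this call works on Unity iPhone is exactly the open question:

```csharp
using UnityEngine;

// Sketch: copy the current framebuffer into a Texture2D after rendering.
// Assumes Texture2D.ReadPixels is available on the target Unity version.
public class ScreenGrabber : MonoBehaviour {

    public Texture2D captured;

    public void OnPostRender()
    {
        if (captured == null)
        {
            captured = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
            // ReadPixels reads from the active framebuffer into the texture.
            captured.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
            captured.Apply();
        }
    }
}
```

Attached to the background camera, this would grab the rendered frame once, after which the camera could be disabled and the captured texture drawn instead.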