hi all,
I’m working on a Donkey Kong, LCD-style game in Unity that will be published for iPad. It’s been going great, but as I get to the end I’m starting to notice some lag, so I’m looking for ways to optimise it.
You can see a small section of the game here: http://www.lcdemakes.com/transparentexample.png
The colour comes from a flat background image (png), but the other parts are made up of multiple transparent pngs. So in that pic the separate pieces are:
chest
enemy
hero body
hero attack arms
etc.
There are hundreds of these in the game, and they get turned on and off to create the frame-by-frame animations. Just to be clear, the images are transparent pngs - solid black for the shapes, fully transparent everywhere else - no gradients or anything.
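To give a sense of how it works under the hood, the animation code doesn’t move or scale anything - it just flips renderers on and off on a timer. Something like this, heavily simplified (the class and field names here are made up for the example):

using UnityEngine;

// Simplified sketch of how the LCD "segments" get flipped on and off.
// Each animation frame is just the set of renderers that should be visible.
public class LcdAnimation : MonoBehaviour {
    public Renderer[] segments;     // every sprite plane for this character
    public float frameTime = 0.2f;  // how long each LCD pose is shown

    // indices into segments for each frame, e.g. hero body + attack arms
    private int[][] frames = new int[][] {
        new int[] { 0, 1 },
        new int[] { 0, 2 }
    };

    private int current;
    private float timer;

    void Update() {
        timer += Time.deltaTime;
        if (timer < frameTime) return;
        timer = 0f;
        current = (current + 1) % frames.Length;

        // turn everything off, then enable only this frame's segments
        for (int i = 0; i < segments.Length; i++) segments[i].enabled = false;
        foreach (int index in frames[current]) segments[index].enabled = true;
    }
}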
I’ve tried a range of shaders for the sprites, but this is the latest:
Shader "Sprite" {
Properties {
_MainTex ("Base (RGB) Trans (A)", 2D) = "white" {}
}
SubShader {
Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
// LOD 100
ZWrite Off
Blend SrcAlpha OneMinusSrcAlpha
Lighting Off
Pass {
SetTexture [_MainTex] { combine texture }
}
}
}
Forum member Jessy suggested the shader should be multiplicative, but whenever I add that kind of blending to any of my shaders the images turn solid black, and I don’t know enough about shaders to work out what the problem is.
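For reference, this is the sort of thing I tried - it’s the same shader as above with the alpha blend swapped for what I believe is a multiplicative blend mode (the Blend DstColor Zero line), so that line is just my best guess at what Jessy meant:

Shader "Sprite Multiply" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
    }
    SubShader {
        Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
        ZWrite Off
        // multiplicative: frame buffer colour * texture colour
        Blend DstColor Zero
        Lighting Off
        Pass {
            SetTexture [_MainTex] { combine texture }
        }
    }
}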
So, Question 1: what would be the most efficient shader to use for this type of sprite? Am I going about it completely the wrong way?
Question 2: given that nothing actually moves in my scene (things just get turned on and off), would there be anything to be gained by setting everything to static? It seemed logical to me but I tried it and it didn’t seem to help.
Question 3: each image is retina resolution, scaled down to 50% in the scene. We did this so they wouldn’t have to be scaled above 100% on retina, but is this the right approach?
Question 4: I have around 40 sfx in the game. All short, and most played regularly. At the moment I have a sound controller that attaches 40 AudioSource components and assigns an AudioClip to each one, ready to be played. This made it easy to set up their volumes and allows different combinations of sounds to be played at once, which happens constantly in this sort of game. Is this a bad way to do it? The sounds are a mixture of aiff and mp3 - is one much better than the other when publishing to iPad? The aiffs are all set to compressed. Commonly played sounds are set to decompress on load, and less common sounds are left compressed in memory.
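The controller is basically this, simplified (the names are made up for the example, and the real one reads the per-clip volumes from a table):

using UnityEngine;

// Simplified sketch of the sound controller: one AudioSource per clip,
// created up front so any combination of sfx can play at the same time.
public class SoundController : MonoBehaviour {
    public AudioClip[] clips;    // the ~40 sfx, assigned in the inspector
    public float[] volumes;      // per-clip volume, same order as clips

    private AudioSource[] sources;

    void Awake() {
        sources = new AudioSource[clips.Length];
        for (int i = 0; i < clips.Length; i++) {
            AudioSource source = gameObject.AddComponent<AudioSource>();
            source.clip = clips[i];
            source.volume = (i < volumes.Length) ? volumes[i] : 1f;
            source.playOnAwake = false;
            sources[i] = source;
        }
    }

    // Gameplay code calls this whenever a sound should fire.
    public void Play(int index) {
        sources[index].Play();
    }
}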
Question 5: Aside from the main game screen, the scene is made up of a grid of 9 (3x3) big planes, each with a 2048x1536px jpg on it. We did this so that when buttons are clicked (like instructions) we can just zoom the camera over to that part of the grid rather than load a new scene. It looks nice, but it seems to be chewing up a fair bit of memory as well and I often get memory warnings during the pan. Is there a more efficient way to do this effect? And is it likely to impact the smoothness of the game if these planes are all off camera while the game is being played?
Anyway, sorry for asking so many questions. This is my first proper iOS game made in Unity and I haven’t been able to get very far with the profiling tools in Unity and Xcode.
Thanks in advance
Bill