Hi,
The best format for our game is RGBA5551. Is there any way to import these sprite sheets at the same size/memory usage? Unity doesn’t seem to support the format. Can it be done manually?
I’m not sure if this is still the case, but you used to be able to convert textures manually with PVRTexTool to the iOS PVR format of your choice (e.g. RGBA5551) and then use them directly in Unity. That way you could guarantee the right format.
Unfortunately it only works for iOS. I suspect the reason Unity doesn’t give you the option is that not all graphics cards support RGBA5551; it’s a fairly old-style format nowadays.
Thanks for the response! I see, that’s a shame. It gave a big size reduction on our sprites with no loss in quality. Will have to look into the next best option.
So, it sounds like you want to use only 2 bytes per pixel, with a 1-bit alpha channel. If you are concerned about memory use more than about the little extra shading work needed to make the gain, you could consider converting your graphics to an indexed color (palette) format: use one byte to say which row and one byte to say which column to look up in a 2D palette texture. That gives you 65536 palette entries, and you can also swap to a different palette to re-color your sprites, re-using the same index data. You could store each sprite as two Alpha8 textures (one for rows, one for columns). You’d need a custom shader to do the lookup, and bilinear filtering doesn’t work well with it, since filtering would blend the indices rather than the colors. As for the alpha channel, you could reserve 1 bit of the index for transparency, which leaves a 256x128 palette.
This way you’ll use almost the same amount of memory (plus a little extra for the palette texture(s)), and you get the same number of unique colors as RGBA5551 (32768), but each one picked from the full 24-bit range.
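For what it’s worth, here is a rough sketch of what such a lookup shader could look like, written as a plain unlit sprite shader. It’s untested, and the texture names and the choice of putting the transparency flag in the top bit of the row byte are just illustrative assumptions:

Shader "Sprites/PaletteLookup"
{
    Properties
    {
        _RowTex ("Row indices (Alpha8)", 2D) = "black" {}
        _ColTex ("Column indices (Alpha8)", 2D) = "black" {}
        _Palette ("Palette (256x128)", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }
        ZWrite Off
        Blend SrcAlpha OneMinusSrcAlpha
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _RowTex;
            sampler2D _ColTex;
            sampler2D _Palette;

            struct v2f
            {
                float4 pos : SV_POSITION;
                float2 uv  : TEXCOORD0;
            };

            v2f vert (appdata_img v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = v.texcoord;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // Read the two one-byte indices (stored in the alpha channel
                // of Alpha8 textures). These must be point-filtered.
                float rowByte = floor(tex2D(_RowTex, i.uv).a * 255.0 + 0.5);
                float colByte = floor(tex2D(_ColTex, i.uv).a * 255.0 + 0.5);

                // Assumed encoding: the top bit of the row byte marks a
                // transparent pixel, leaving 128 usable palette rows.
                float alpha = rowByte >= 128.0 ? 0.0 : 1.0;
                rowByte = fmod(rowByte, 128.0);

                // Sample the centre of the matching palette texel.
                float2 palUV = float2((colByte + 0.5) / 256.0,
                                      (rowByte + 0.5) / 128.0);

                fixed4 c = tex2D(_Palette, palUV);
                c.a = alpha;
                return c;
            }
            ENDCG
        }
    }
}

Note that the two index textures have to be imported with point filtering and no mipmaps; any filtering on them blends the indices themselves, so the lookup would land on unrelated palette entries.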
Thanks for the info. Will look into it. Do you know what type of performance hit could be expected?
Mainly, add up how many color components have to be read per pixel: two index bytes plus three or four palette components comes to 5 or 6, compared to 2 bytes for a single 16-bit read. The palette texture is small, though, so it should stay cached fairly well, and the real cost should be lower than that ratio suggests.