I noticed that my game also works if I set compression to DDS.
Is PVRTC giving better performance, or why else should I use this format on Android?
DXT quality is A LOT better than PVRTC 2-bit, but of course that compression method offers a much higher compacting rate.
DXT is also practically the same quality as PVRTC 4-bit; the latter just takes A LOT more time to compress textures.
So the only motivation for me to use PVRTC over DXT would be the better compression of the 2-bit version.
Or is it better to use PVRTC to guarantee compatibility between devices?
Also, wouldn’t compatibility be better managed by Unity itself, by recompressing textures on first startup when needed?
There is no “or”, actually, as it depends on the hardware.
Keep in mind these are hardware compression formats, so if a format is not supported by the hardware it effectively doesn’t exist, and that leads to software decompression on load.
Uhhh this may explain our loading times… this should be written in big capital letters in the Android options
I’ll check it later on, thanks for the info!
That’s the price of device fragmentation, and especially of the fact that four completely different types of graphics hardware, with different origins, are in use (SGX → PVRTC, ETC, DXT; Adreno in the Qualcomm chips → ATITC; TI has its own one, though I’m not sure what they support on the compression front; Tegra 2 → DXT, ETC).
It will take another few years, until you only deal with OpenGL ES 2.1 and newer devices, before you get rid of this annoying problem, as ES 2.1 finally specifies its own compression format. It’s a shame, and laughable, that it takes them years to learn from something Microsoft showed them well over a decade ago with DirectX 7 and DXT.
Also, PVRTC is and will remain the most solid one (better quality per bit, commonly), but that’s no wonder, as S3TC (more widely known as DXT) and PVRTC were both developed by Imagination Technologies: S3TC originally for the original PowerVR hardware, and PVRTC for the “next generation PowerVR hardware” that we know as SGX in iOS devices, the Galaxy S, etc.
Thank you for this info… any link with some stats on the adoption of these different compression standards?
I would like to know which one is CURRENTLY the most used and convert all my textures to that format.
The current deal is this: use ETC1 for non-alpha textures - it’s supported natively by all GLES 2.0 platforms. For textures with alpha-channel the default “compression” method in Unity is to use RGBA16, as that is the best tradeoff between package distribution size, loading speed, and rendering performance.
If you know that you are only targeting a specific hardware range (like, only Tegra-based devices) you can choose the native compression format of that hardware. If the selected compression format is not available in hardware at runtime, Unity will decompress the texture to RGBA32, which then leads to penalties in rendering performance and memory consumption, as well as a small cost in load time (as it needs to decompress the texture).
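To put rough numbers on that tradeoff, here is a quick sketch of the in-memory size of a single 1024x1024 mip level in each format. The bits-per-pixel figures are assumptions taken from the formats' public specifications, not from Unity itself:

```python
# Approximate GPU memory footprint of one mip level for common
# texture formats. Bits per pixel are from the formats' public
# specs (assumption), not queried from Unity.
BITS_PER_PIXEL = {
    "ETC1": 4, "DXT1": 4, "PVRTC4": 4, "PVRTC2": 2,
    "RGBA16": 16, "RGBA32": 32,
}

def texture_bytes(width, height, fmt):
    """Size in bytes of an uncompressed-on-upload mip level."""
    return width * height * BITS_PER_PIXEL[fmt] // 8

for fmt in ("ETC1", "PVRTC2", "RGBA16", "RGBA32"):
    kib = texture_bytes(1024, 1024, fmt) / 1024
    print(f"{fmt:7s} 1024x1024 -> {kib:.0f} KiB")
# ETC1 -> 512 KiB, PVRTC2 -> 256 KiB, RGBA16 -> 2048 KiB, RGBA32 -> 4096 KiB
```

So a texture that falls back from ETC1 to decompressed RGBA32 costs roughly eight times the memory, which is the penalty described above.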
That information is in fact also in the documentation; ETC as Recommended Texture Compression.
There is a trick: use 2 ETC1 textures and combine them in the shader to get uniformly supported compressed texture fetches at runtime (i.e. no decompression of textures).
To do that, you split the RGBA texture into an RGB texture and an alpha texture; the latter is an RGB texture with the alpha in the G channel and the R and B channels unused. Then you create a custom shader where the single texture fetch is replaced by two: one to the first texture for RGB, and one to the second for A, like this:
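The shader snippet itself didn't survive in this thread. As a hedged sketch, in a Unity/Cg fragment shader the two fetches would amount to something like `fixed4 c = tex2D(_MainTex, uv); c.a = tex2D(_AlphaTex, uv).g;` (the `_AlphaTex` name is made up here). The equivalent per-texel math, in Python for illustration:

```python
def combine_rgb_and_alpha(rgb_texel, alpha_texel):
    """Per-fragment combine: RGB comes from the colour texture,
    A comes from the G channel of the second texture, as the two
    texture fetches in the custom shader would do."""
    r, g, b = rgb_texel
    _, a, _ = alpha_texel  # R and B of the alpha texture are unused
    return (r, g, b, a)

# Colour texture holds RGB; the second texture carries A in G.
print(combine_rgb_and_alpha((200, 100, 50), (0, 128, 0)))
# -> (200, 100, 50, 128)
```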
IIRC, ETC represents texels using sets of 2x4 (or 4x2) texels, where each set is generated by taking a base colour and adding on amounts of grey.
Assuming I’m correct, rather than leaving the R and B channels unused, mightn’t it be better to represent the alpha texture by setting all of the R, G, and B components to be the same?
Of course, this does mean that you need 8bpp for your overall texture, whereas for devices that do support PVRTC you might be able to use 4bpp (or maybe even 2bpp).
What if I chose a base colour of green and multiplied it by amounts of grey? Would that be like you choosing a base colour of white and multiplying it by amounts of grey (luminance)?
I’ll tell you why green was the better choice in DXT; I’m not sure if that holds for ETC. In DXT, colours are represented as 5:6:5, so if you chose all channels to represent a luminance, you would risk your slightly less precise R and B channels muddying your G channel.
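A quick sketch of why the 5-bit channels are riskier: round-tripping every 8-bit grey level through a 5-bit channel gives roughly twice the worst-case error of the 6-bit G channel. (This is straight quantization, ignoring DXT's block endpoints and interpolation, so it only illustrates the precision gap.)

```python
def quantize(value, bits):
    """Round-trip an 8-bit channel value through an n-bit channel
    (simple scale and rescale; real DXT encoders also interpolate
    between block endpoints, which this sketch ignores)."""
    levels = (1 << bits) - 1
    q = round(value * levels / 255)
    return round(q * 255 / levels)

# Worst-case absolute error over all 8-bit grey levels:
err5 = max(abs(v - quantize(v, 5)) for v in range(256))
err6 = max(abs(v - quantize(v, 6)) for v in range(256))
print(err5, err6)  # -> 4 2: the 5-bit channels are twice as coarse
```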
ETC doesn’t work that way. It makes much the same sort of assumption that is used in YUV 4:2:0 video: in (natural) images, each local neighbourhood of pixels has basically the same chroma and mainly just varies in brightness. If you want to look at the specifics of ETC, it’s described in the paper “iPACKMAN: High-Quality, Low-Complexity Texture Compression for Mobile Phones” by Ström and Akenine-Möller.
FWIW I just did a quick test with a grey scale version of “Lena”, i.e. all channels are set to be identical. I compressed that with ETC and got an RMS error of 4.14.
On the other hand, if you then set the R and B channels to zero, compress that, and (copying the compressed G results back into R and B so that the same error metric can be used) compare, you get a much higher error of 6.58.
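For reference, the figures quoted here are presumably a standard per-channel root-mean-square error; IMG's internal tool isn't public, so this is just a sketch of that metric:

```python
import math

def rms_error(original, decoded, channels=(0, 1, 2)):
    """Root-mean-square error between two images over the selected
    channels. Images are equal-length lists of channel tuples."""
    total, count = 0.0, 0
    for a, b in zip(original, decoded):
        for c in channels:
            total += (a[c] - b[c]) ** 2
            count += 1
    return math.sqrt(total / count)

# Toy example: two 2-pixel "images"
orig = [(10, 20, 30), (40, 50, 60)]
dec  = [(12, 20, 30), (40, 48, 60)]
print(round(rms_error(orig, dec), 3))  # -> 1.155
```

Passing `channels=(1,)` restricts the metric to the green channel only, which is the comparison discussed just below.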
It’s OK, I’m quite familiar with DXTC/S3TC - I’ve been working, on and off, in texture and image compression schemes for many years now. ETC, although still a “block-based” texture compression system, uses a different scheme to S3TC. Of course, PVRTC is entirely different again.
Sorry Simon, pom originally, that doesn’t mean we can’t nut this out over a beer though
Hmmm, that is interesting; thanks for taking the time to check it out. I’d be interested in the error metric in the green channel alone, i.e. setting R and B to zero in the grey-scale Lena after decompressing. That would be closer to what I’m trying to achieve, as we would just be sampling the green channel in both cases.
OK, using the internal comparison tool we have here at work (IMG), the RMS difference in just the green channel is 2.39 if you use grey scale (i.e. set all channels to be equal before comparison) and 3.79 if you set R and B to be zero, which is as I expected. In other words, don’t use the same practice as you would for DXTC.
Yes, it only appears to give you statistics (via the little graph icon) if you have done the compression in the tool itself. I should have a word with the devtech guys here at work. The AMD Compressonator is supposed to do comparisons but, for some reason, crashed when I tried it.
Beware: I found that PVRTexTool didn’t produce sensible stats if you were displaying the differences between the original and compressed data. Turn off the difference display and they should be calculated correctly.
Can someone explain how to put the alpha value into the G channel? I’ve been googling and poking around in Photoshop for a week now and can’t figure it out.
We found it was much, much easier to write a script in Unity to do it.
We hacked up an editor script, something along the lines of:
- For each texture in the assets:
  - Convert it to 32-bit and readable
  - Read the pixels
  - Create 2 new textures; copy the alpha to one and the colour to the other
- Convert all the textures back to Automatic.Compressed
- Save out
Then we have a separate editor script that goes through all the alpha-blended materials and swaps their shader and their textures to reference the new ones.
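The split step above can be sketched language-agnostically. The actual Unity editor script would work through `Texture2D.GetPixels`/`SetPixels`, but the per-pixel logic is the same; here it is in plain Python, with all names made up:

```python
def split_rgba_image(pixels):
    """Split a list of RGBA tuples into the two textures the
    trick needs: an RGB colour image, and an 'alpha' image that
    stores A in the G channel (R and B unused, set to 0)."""
    colour, alpha = [], []
    for r, g, b, a in pixels:
        colour.append((r, g, b))
        alpha.append((0, a, 0))
    return colour, alpha

src = [(255, 0, 0, 128), (0, 255, 0, 255)]
colour_tex, alpha_tex = split_rgba_image(src)
print(colour_tex)  # -> [(255, 0, 0), (0, 255, 0)]
print(alpha_tex)   # -> [(0, 128, 0), (0, 255, 0)]
```

Both output images are plain RGB, so each can be compressed as ETC1 and recombined in the shader at runtime.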
You can definitely do it in GIMP - I just have to try and remember how…
If you select Colours->Components->Decompose, then choose RGBA as the colour model, you can opt to get either 4 separate monochrome images or a single image with 4 monochrome layers. If we assume the former, you can then copy the alpha data.
Then go to your destination image and select Windows->Dockable Dialogues->Channels. Click on the right-hand side of the various listed channels so that the ones you want to change are highlighted in blue, and then just paste away. (Note: if you click on the left-hand side, you can toggle the display of the various channels. This is independent of which ones you are editing.)