Feedback or comments needed for texture formats

In Unity today we have TextureFormat, RenderTextureFormat, and TextureImporterFormat. I'd like to understand how users work with them, so I have a few questions:

  1. What is your method of choosing a format to use?

  2. Do you struggle with choosing a correct format?
    Correct means the format fits your needs, is supported on your platforms, etc.

  3. Does the current API help you make the decision? Do you rely on APIs such as SystemInfo.SupportsRenderTextureFormat?

  4. Any other difficulties you have when you work with texture formats?

I’m very new to Unity, but looking at the links I was surprised EAC formats are available given I wasn’t seeing them in the editor for texture compression. Not having much experience with Unity I’m going to answer based off my gut…

  1. I would imagine that, these being enums, common formats would have the same underlying value, but I would stick with the appropriate enum for each use case (TextureFormat for a straight texture my shaders are going to sample from, RenderTextureFormat for a texture I intend to render to, and presumably TextureImporterFormat would be formats the texture importer knows how to load?)

  2. It seems like Unity has made certain choices forcing my hand to not try and target multiple fallback cases (i.e. multiple compression methods, linear and gamma). It seems like I'd author maps / combine channels outside of Unity, then import, apply texture compression, and see if the result looks acceptable.

  3. For targeting, say, GLES 3.x, I'm probably just going to use what the spec says is more or less guaranteed (I believe Unity has a software decompressor anyway and falls back to uncompressed textures if a format is unsupported?). As far as the API helping, I'd imagine it ensures I use the correct enum class (or at least makes me feel naughty casting one to the other).

  4. There's a current thread somewhere here about a person finding that YUV support ceased working; I'll see if I can link it. EDIT: TextureFormat.YUY2 (21) is not supported on the Android platform.

1 Like

Regarding a render texture format:

  1. We choose the main render texture format based on the GPU vendor. Nvidia Tegras generate a lot of banding and require a 32-bit format, while other vendors seem to be OK with the 565 format.

  2. It needs some hardcoding, but it's not too much of a struggle.

  3. We use SystemInfo.SupportsRenderTextureFormat just to make sure that the target device really supports 565, so we can fall back to 32 bits if it doesn't.

  4. Not exactly related to texture formats, but it would be great to be able to change the "use 32 bit display buffer" setting on the fly on the device, and not only before the Android build.
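The fallback pattern described in point 3 can be sketched generically. This is an illustrative Python sketch, not Unity code: `supports` stands in for a runtime capability check like Unity's SystemInfo.SupportsRenderTextureFormat, and the format names are just strings for demonstration.

```python
# Sketch of the "preferred format with fallback" pattern: walk an
# ordered preference list and take the first format the device supports.
# `supports` is a stand-in for a real capability query such as
# SystemInfo.SupportsRenderTextureFormat.

def pick_format(preferred, supports):
    """Return the first supported format from the preference list."""
    for fmt in preferred:
        if supports(fmt):
            return fmt
    raise RuntimeError("no supported render texture format found")

# Example: prefer RGB565 to save bandwidth, fall back to 32-bit ARGB
# on GPUs (e.g. some Tegras) where 565 causes visible banding.
fmt = pick_format(["RGB565", "ARGB32"], lambda f: f == "ARGB32")
# fmt == "ARGB32" on a device that rejects RGB565
```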

1 Like

I use a lot of different formats to try to optimize texture use. I put a lot of data into textures for shader work, so uncompressed formats are common, with varying channels or precision amounts. We also optimize for texture size by choosing explicit compression formats (2bpp/4bpp on mobile, etc)

And I run into a lot of bugs in Unity with texture formats:

  • HDR formats getting clamped to LDR ranges on mobile
  • HDR formats getting written to disk as Gamma when set to linear, and thus not coming back as the same data they were written in
  • Alpha8 Sprites being removed from the Texture Importing UI in recent versions of Unity because SingleChannel is now in the same enum as Sprite. We use Alpha8 Sprites for most of our UI, which is all based on SDF and MSDF techniques.
  • Not being able to write all formats to disk
  • ARGB or RGBA - RGBA throws errors when you Get/Set Pixels and tells you to use ARGB, but ARGB is no longer an import format selectable from the UI (but still from code). I write some stuff that has to work across multiple versions of Unity, and nearly every new release means new #ifdef’s to fix changes to the texture importer pipeline.
  • TextureArrays seem to get serialized into YAML when saved as asset files with force ASCII on, which creates slow load times. There's also no inspector for TextureArray assets (you can write your own, but I can't release it because it will likely conflict with anyone else's).
  • Documentation is very incomplete about each format's expected capabilities (is it scalar, signed, or unsigned? Which platforms does it work on, and with what caveats?)

We also maintain an extensive set of importer scripts to automate the process for our less technically minded coworkers.

Yes. Between the bugs, the lack of documentation, and the undocumented differences in how formats are treated on different devices, it can be quite time consuming to test a new format before you can OK its use. Especially if that test requires building asset bundles and deploying to 10+ different devices and platforms, because you can't trust that it will actually work in all the cases it's supposed to.

No, because the format might be supported, but that doesn’t mean it’s going to work correctly. If you put a bunch of data into an RGBAHalf texture it will work fine on PC, but get truncated to 0-1 values on devices that claim they support the format. You can’t rely on the API or the docs to tell you what is going to work.

Yes, I want better control of what happens to the texture data throughout the pipeline. Let's say I have a normal map which has been cast from a high-res to a low-res model. Often, the bulk of the deviation between these two models is subtle and exists entirely within the 0.4 to 0.6 range of the texture. Normal baking pipelines output a 16bpc image, but Unity quantizes this down to 8 bits, so now we have 8 bits of information representing values that mostly span 20% of the range. By the time this hits the compressor, the subtle gradients have been squashed to nothing, and your specular response looks crunchy.

What I would love to be able to do is modify this data before it goes into the compressor and before it gets quantized. Basically, normalize it to use the full 0-1 range through a method of my choosing and store off some data associated with the texture about that normalization, then send it to the compressor, and in the shader, un-normalize it, allowing me to get much higher quality normals with the same amount of storage and only a MULADD in the shader.
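The idea above can be sketched numerically. This is an assumed illustration, not Unity's actual pipeline: expand the used sub-range to the full 0-1 range before 8-bit quantization, store the (scale, offset) pair alongside the texture, and undo the normalization in the shader with a single mul-add.

```python
# Sketch of range normalization before quantization. Over [0.4, 0.6],
# plain 8-bit quantization leaves only ~51 usable codes; normalizing
# that sub-range to [0, 1] first uses all 256, and the shader undoes
# it with one MULADD: x * (hi - lo) + lo.

def quantize8(x):
    return round(x * 255) / 255  # plain 8-bit quantization over [0, 1]

def encode(x, lo, hi):
    return quantize8((x - lo) / (hi - lo))  # normalize, then quantize

def decode(q, lo, hi):
    return q * (hi - lo) + lo  # the single MULADD done in the shader

lo, hi = 0.4, 0.6  # the sub-range the normal map actually uses
x = 0.4537
naive = quantize8(x)                        # quantized without normalization
smart = decode(encode(x, lo, hi), lo, hi)   # quantized with normalization
# The normalized round trip lands measurably closer to the original x.
```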

I would also like Unity to compress textures off the main thread, so it doesn't tie up my machine importing a project for an hour or 18 (one of our games takes 18+ hours to compress textures into Android formats if the cache server cannot be used).

I want the "Can I fix it for you?" prompt removed from normal maps. I often pack extra data into things which look like normal maps, and have written my own texture controls to prevent the prompt from showing up in the editor, but if you use the "bump" texture in your property definition, this check will still be triggered. A "grey" default texture value with linear 0.5, 0.5, 0.5, 1 would also be very useful.

I would like to be able to generate my own mips more easily. Right now we do it by hand and put them into a DDS file.

I would like texture type and format to be better separated in the importer UI. Right now, Sprite, Normal Map, Single Channel, etc all exist in the same enum. I blame normal map for this, since it’s both a choice of texture format and how to process the texture. Now that enum is all messed up. Anyway, this means you can’t have a sprite which is a single channel because it’s either sprite or single channel. Will single channel be R8 or A8 on my device? Who knows. If I sample .a in the shader, will that work with an R8 Texture? Only way to find out is to test a bunch of different devices, or not use those options.

Oh, and everything for managing texture memory is completely outdated in Unity - it treats platforms as if they are a single device, but an Android device could have 512 MB or 4 GB of RAM.

On the flip side of this, regular users are in my experience overwhelmed by the complexity of texture options. Just linear/sRGB is too much for most people to understand - so I get that the challenge here is not an easy one.

13 Likes

We use PNG with the mipmaps side by side and combine them on import. It's a fairly quick operation and allows you to change texture settings that you are unable to change in DDS files.
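The side-by-side mip approach can be sketched as follows. Note the layout here is an assumption for illustration (mip 0 on the left, each following level half the size, packed left to right) and may not match the poster's actual convention.

```python
# Sketch of slicing side-by-side mips out of one PNG strip. Assumed
# layout: mip 0 at x = 0, each subsequent level half the size of the
# previous one, packed left to right on a single row.

def mip_rects(w, h):
    """Yield (x, y, width, height) for each mip level in the strip."""
    x = 0
    while w >= 1 and h >= 1:
        yield (x, 0, w, h)
        x += w
        w //= 2
        h //= 2

rects = list(mip_rects(256, 256))
# 9 levels: (0, 0, 256, 256), (256, 0, 128, 128), ... down to (510, 0, 1, 1)
```

On import, each rectangle is copied into the corresponding mip level of the target texture.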

On all other points, +1. Running into similar things here.

1 Like

You just defined the problem I've been pondering for over a week. By chance, do you know a way to enforce different texture compression formats for different devices on either iOS or Android? For instance, my game crashes due to low memory on an iPhone 5s (1 GB memory) but runs OK on 2 GB devices. So I packed everything using PVRTC 4-bit, but then some textures are hard to stomach. What I would like is to keep the PVRTC 4-bit format for low-end devices while switching to a better-quality compression for devices with 2+ GB of RAM (say, ETC2 8-bit maybe). I tried using OnPreprocessTexture() to no avail (it is said that the Editor folder is stripped in builds, so it's not working in mobile builds). Any info would be much appreciated.

1 Like

Use asset bundles and deliver separate manifests to each device. We do this in a complex build process, where we build several different versions of the asset bundles for each level of device.

1 Like

I was afraid you'd say that :slight_smile: We use a similar complex pipeline for asset bundles. Needless to say, they have their own drawbacks. Thank you for the answer though.

1 Like

1. What is your method of choosing a format to use?
We’re currently moving a research project to Unity (from a custom OpenGL rendering engine). The project relies a lot on 3d textures and voxel data. The formats we use are a compromise between image quality and memory requirements.

2. Do you struggle with choosing a correct format?
Yes. For instance, with a 3D texture of size 512x512x512, a single byte per voxel is already about 134 MB. Therefore, we have to be careful to use the minimal number of bytes per voxel.
For some of our data, we would like to use a single unsigned integer channel (i.e. GL_R16UI), but this does not seem to be supported in Unity. There is a RGBAUShort render texture format, but no corresponding texture format. And there is no RUShort variant at all.
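The memory math here is easy to check. A quick sketch (ignoring mipmaps and any driver padding):

```python
# Memory footprint of a 3D texture at various bytes-per-voxel counts,
# ignoring mipmaps and driver padding. A 512^3 volume at 1 byte/voxel
# is 134,217,728 bytes (~134 MB, or 128 MiB).

def volume_bytes(w, h, d, bytes_per_voxel):
    return w * h * d * bytes_per_voxel

for name, bpv in [("R8", 1), ("R16", 2), ("RHalf", 2), ("RGBA32", 4)]:
    mb = volume_bytes(512, 512, 512, bpv) / 1e6
    print(f"{name}: {mb:.0f} MB")
# R8 comes out at ~134 MB; a four-channel 8-bit format is already ~537 MB.
```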

3. Does the current API help you to make the decision? Do you rely on the APIs
We don’t need to make these decisions at runtime.

4. Any other difficulties you have when you work with texture formats?
Loading (3D) textures at runtime is quite cumbersome currently. LoadRawTextureData is only implemented on Texture2D, not on Texture3D or RenderTexture. Because of this, loading a 3D texture from disk at runtime is more difficult than it should be. You either have to use native rendering plugins just to load a texture, or try hacking a solution together using compute shaders.

It would be nice to have access to all available texture formats, even if it means losing some platform or graphics library compatibility. Some (research) projects don’t need to be compatible with every platform, but they need to have better access to the platform they are built for.

It would also be very welcome to have a LoadRawTextureData for Texture3D and RenderTexture as well. Thanks!

3 Likes

(Just in case you want to have an early idea about what we are doing)

In 2018.2 beta, we already have GraphicsFormat enum which exposes all the available formats. It also comes with a bunch of useful functions for you to check the details of each GraphicsFormat. Take a look at the API here:

GraphicsFormat Enum

GraphicsFormatUtility
https://github.com/Unity-Technologies/UnityCsReference/blob/master/Runtime/Export/GraphicsFormatUtility.bindings.cs

Of course these don’t solve all the problems we have. So we are still working hard on it. Feel free to raise questions.

This looks very promising! Thanks for letting us know.

One issue I was dealing with recently related to texture formats was a mismatch between the format of the texture and the format of the data I was trying to upload (see feedback). Currently LoadRawTextureData requires the formats to match, but this isn't required by all graphics APIs, certainly not OpenGL. Instead, you can specify the format of your data when you call glTexSubImage3D.

In the end, I wrote a native plugin that I can pass a pointer to my raw data to, which then calls glTexSubImage3D. For this, I used a custom enum of formats, similar to the new GraphicsFormat enum (although not as extensive) that I could use to specify the actual format of the data (e.g. GL_R, GL_UNSIGNED_SHORT).

On a related note, glTexSubImage3D also takes width, height, depth and offsetX/Y/Z parameters which you can use to upload only a part of a texture at once, e.g. a single slice of a 3D texture. This is especially useful for bigger 3D textures, that you cannot load at once, without blocking the main thread for seconds (big issue in VR). As far as I know, this can only be fixed with native plugins currently.
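The per-slice upload math for a tightly packed volume is straightforward to sketch. This is an illustration of the bookkeeping you'd do around glTexSubImage3D, not actual plugin code; the numbers match the 512x446x459 unsigned-short example discussed below in this thread.

```python
# Sketch of batching a tightly packed 3D texture into slice-wise
# uploads, as you'd feed them to glTexSubImage3D: each call uploads
# `batch` slices starting at z, at byte offset z * slice_bytes.

def slice_batches(width, height, depth, bytes_per_texel, batch):
    """Yield (offset_z, slice_count, byte_offset, byte_count) per upload."""
    slice_bytes = width * height * bytes_per_texel
    z = 0
    while z < depth:
        n = min(batch, depth - z)  # the last batch may be short
        yield (z, n, z * slice_bytes, n * slice_bytes)
        z += n

# 459 slices of 16-bit data at 10 slices per frame -> 46 uploads,
# spreading the transfer cost across frames to avoid hitches.
batches = list(slice_batches(512, 446, 459, 2, 10))
```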

Does Graphics.ConvertTexture() help?

I don’t think it does currently. Maybe with the graphics format refactor it will?

For example, I have a binary pre-generated 3D texture of size 512 x 446 x 459 with a single unsigned short channel. I couldn’t figure out a way to load this data into a texture (efficiently) without native plugins. As far as I can tell, I cannot create a Texture3D or RenderTexture of this format (RUShort). Will this be possible with the new graphics formats?

Currently I create a 3D RenderTexture of type RFloat and use glTexSubImage3D to load the "RUShort" data into it (10 slices per frame to avoid frame drops).

I was just looking at this. It would be useful if GraphicsFormatUtility could return the texel size in bytes, when that makes sense. However, none of this is much good if the Unity APIs cannot create any hardware supported texture formats outside of the current set. For example, R8_UINT would be useful to me, and is required by DX11.
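The texel-size query being asked for amounts to a lookup like the following. This is a hypothetical illustration only: the format names mimic GraphicsFormat naming, the table is tiny compared to the real enum, and block-compressed formats would need a bytes-per-block query instead.

```python
# Illustration of a per-texel byte size lookup (hypothetical table;
# real format lists are much larger, and block-compressed formats
# need bytes-per-block rather than bytes-per-texel).

TEXEL_BYTES = {
    "R8_UInt": 1,
    "R16_UInt": 2,
    "R16G16B16A16_SFloat": 8,
    "R32G32B32A32_SFloat": 16,
}

def texel_size(fmt):
    """Return bytes per texel, or raise for unknown/block formats."""
    try:
        return TEXEL_BYTES[fmt]
    except KeyError:
        raise ValueError(f"unknown or block-compressed format: {fmt}")

size = texel_size("R16_UInt")  # 2 bytes per texel
```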

Relatedly, the inability to set UAV bind flags without also setting render target bind flags is a performance problem for 3D textures (e.g. LUTs, SDFs). I made a feature request about this.

Hey!
I'm not sure if it's a bug, but using single-channel textures either

  • doesn’t work as intended or isn’t useful (more RAM usage than RGB) or
  • doesn’t seem intuitive and doesn’t seem to have good documentation

So I filed a bug a long time ago about Unity incorrectly serializing HDR textures by gamma correcting them when they are linear, and the bug was resolved as won't fix because they decided someone might be relying on the bad behavior. But for the life of me I can't figure out how anyone could be relying on a format that doesn't load the same data you just saved, to such an extent that it visibly doesn't look like what you saved at all. The reality is you just can't rely on it at all.

2 Likes

I’m facing the same issue and your native plugin sounds like a good way to solve this. Do you mind sharing it?

Hello, I figured this is the best place to request this feature, but I'm not certain about the logistics of it. Would it be possible to include an RGBA RenderTexture format where the alpha channel is an 8-bit depth value? Essentially an RGBD format? I understand it would be low resolution, but for my needs I'm having to generate a depth texture and an RGBA texture, yet I'm not using the alpha of the RGBA, and I don't require the higher resolution of a 16- or 24-bit depth value.

If it's the case that the depth texture must first be generated before it has any meaning, then so be it, but I figured if it was possible I may as well ask.

Not sure if this is the right place

But please can we have GPU Accelerated texture compression?

Godot is going to implement this, and I think Unity should also invest in this area. Compressing textures is really slow.