Auto MipMap generated on HW

Hi !

For many years I have been working with 3D engines, and one very important feature I miss is the ability to generate mipmaps on the HW (see the GL_GENERATE_MIPMAP_SGIS and GL_GENERATE_MIPMAP extensions in OpenGL).

Is this supported in Unity, or what are your thoughts on this?

E.g. if you are using compressed textures and feed compressed texture data into the texture, it is almost impossible to get decent quality out of them unless automatic mipmap generation is active.

Just to let you know: when you look at a flat surface from an acute angle, the texture starts to blur because a lower (smaller) mipmap level is used, even if the surface is close to the camera.

Automatic, hardware-generated mipmaps are enabled by default for all dynamically created uncompressed textures.

For a Texture2D, if the texture has mipmaps enabled (controlled by the Texture2D constructor), calling tex.Apply() to upload the texture to the GPU automatically generates the mipmaps on the hardware. Texture2DArray, Texture3D, and Cubemap all have similar setups.
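As a minimal sketch (the size, format, and pixel fill are just placeholders):

```csharp
// last constructor argument = true allocates a full mip chain
Texture2D tex = new Texture2D(256, 256, TextureFormat.RGBA32, true);

// ... fill mip 0 with SetPixels / SetPixels32 ...

// Apply() defaults to updateMipmaps = true, so uploading also
// regenerates all of the smaller mip levels from mip 0
tex.Apply();
```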

For a RenderTexture, this happens if mipmaps and automatic generation are both enabled (the useMipMap and autoGenerateMips flags).
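In code, that looks something like this (the resolution is arbitrary):

```csharp
RenderTexture rt = new RenderTexture(512, 512, 0);
rt.useMipMap = true;          // allocate a mip chain (must be set before Create)
rt.autoGenerateMips = true;   // regenerate mips on the GPU after rendering into it
rt.Create();
```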

Compressed textures are different though. GPU hardware cannot generate mipmaps for compressed textures, since GPUs don’t have built-in acceleration for re-compressing those textures and all mips must use the same format as the top mip. Plus you really don’t want to be generating mips from the compressed image, only from the uncompressed source, otherwise any compression artifacts will be amplified. Textures imported by Unity in the editor, or compressed by calling tex.Compress(), generate their mipmaps on the CPU as part of the import and compression rather than on the GPU, to ensure the CPU side has access to those mipmaps for storage & optional script-level access.

I use LoadRawTextureData and would expect the texture to be capable of generating the appropriate textures on the HW. Is there a performant way to do this ?

GPUs can generate mipmaps from compressed textures. It’s a very nice and efficient way to upload just the original compressed texture and let the driver or the GPU create the mipmaps on the fly.

By converting it to an uncompressed texture first, yes. GPUs lack the hardware to generate compressed mip maps. LoadRawTextureData assumes the data being loaded includes the necessary mip maps.
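So for a raw DXT1 blob that already packs the full mip chain, a minimal sketch (resX, resY, and data are assumed to come from your own loader):

```csharp
// 'data' must contain mip 0 followed by every smaller mip, tightly packed
Texture2D tex = new Texture2D(resX, resY, TextureFormat.DXT1, true);
tex.LoadRawTextureData(data); // no mip generation happens here
tex.Apply(false);             // upload as-is; don't try to update mips
```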

I use compressed textures in my application and send just the compressed texture data to the driver and enable mipmaps. Whether the driver creates the mipmaps on the GPU or not is up to the driver and GPU vendor, but you can do that.

As well, you can send texture data that doesn’t contain mipmap data to the driver and let the driver generate the mipmaps.

I also assume that even if the driver needs to do it on the CPU, it is still performant compared to doing it in my own code.

Depends on the device / hardware.

AFAIK if you try to upload a compressed texture and ask for automatic mipmap generation on a mobile device, you’ll either get an error, black mipmaps, or an uncompressed texture. Something like ASTC takes a beefy desktop PC several seconds to minutes(!) to compress, so there’s no way a mobile GPU can do it. I’ve heard some Mali devices will accept ETC1 textures, but that’s a fairly simple format, so it’s plausible to do real-time compression of those on a mobile device (though it’s still going to be pretty slow).

On Desktop, AMD & Nvidia will certainly accept DXT1/5 textures. For Direct3D the spec actually defines what formats can be used, and any compressed textures are supposed to fail silently(!?), though I suspect it’ll work there too for DXT1 & DXT5, at least for non-mobile Direct3D.

But yes, this is happening fully on the CPU, not the GPU. Is it going to be faster than what you can generate in Unity? Probably. But also probably not that much faster.

// load DXT1 texture w/o mip maps
Texture2D temp = new Texture2D(resX, resY, TextureFormat.DXT1, false);
temp.LoadRawTextureData(data);
temp.Apply(false); // upload to the GPU, no mips to update

// create new uncompressed texture with same resolution, with mipmaps
Texture2D tex = new Texture2D(resX, resY, TextureFormat.RGB24, true);

// copy decompressed pixel data into the first mip level
tex.SetPixels(temp.GetPixels(0), 0);

// generate mip maps (Apply() defaults to updateMipmaps = true)
tex.Apply();

// compress into DXT1 (false = fast, lower quality compression)
tex.Compress(false);

// copy the top mip level from the original texture
// (per-mip overload, since the two textures have different mip counts)
Graphics.CopyTexture(temp, 0, 0, tex, 0, 0);

// unload temp texture
Destroy(temp);

However, if you’re generating compressed texture data already, why not generate the appropriate mip maps for it too at the same time? Most tools that generate compressed texture data are going to do that by default, and that’ll be way, way faster than either “automatic” mipmap generation option, and much higher quality.

TLDR; Desktop GPUs might support automatic mipmapping of compressed textures, but it’s technically outside of the API spec to do so. You should be supplying your own mipmaps instead.

GL_SGIS_generate_mipmap is a common extension that lets the HW uncompress textures and create mipmaps on the HW.
GL_EXT_texture_compression_s3tc allows direct streaming of DXT textures down to the HW.

Combining both allows fast streaming of just the raw DXT texture down to the HW, and you get good quality with mipmaps.

Just a comment: I have never looked for the ability to generate compressed textures at runtime, just to use them with high quality.

So basically I think the Unity API could use these if available and then have a fallback like your resampling.

I tested DXT on the Oculus Quest and it falls back to an uncompressed texture in Unity, but without mipmaps generated.

When it comes to supplying mipmaps, it increases the stream load by a factor of 2, so that isn’t something you want. In that case it’s better to take the hit and generate the mipmaps in SW when they should be used. I added that to my native part now, so I feed LoadRawTextureData with mipmaps and all, but it’s still a drawback that we cannot use the HW to do this.

I added compression to my native code, and it looks like Unity can take all mipmaps in compressed format via LoadRawTextureData(data) with packed compressed mipmap data.

Thanx for your response bgolus !

Right now I just need some function to know when the HW doesn’t support various compression formats.
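Unity exposes this via SystemInfo.SupportsTextureFormat. A sketch of a runtime fallback (the format choices here are just an example, and TextureFormat.ASTC_6x6 is the name in recent Unity versions):

```csharp
TextureFormat fmt = TextureFormat.DXT1;
if (!SystemInfo.SupportsTextureFormat(fmt))
{
    // e.g. fall back to ASTC on mobile, or uncompressed as a last resort
    fmt = SystemInfo.SupportsTextureFormat(TextureFormat.ASTC_6x6)
        ? TextureFormat.ASTC_6x6
        : TextureFormat.RGBA32;
}
Texture2D tex = new Texture2D(resX, resY, fmt, true);
```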


A link to the results right now….

Not really, no.
GL_SGIS_generate_mipmap is explicitly only for handling uncompressed textures.
GL_EXT_texture_compression_s3tc is explicitly only for loading and decompressing DXTC format textures of the types DXT1, DXT3, and DXT5.

That’s it.

There’s no interaction between the two extensions in their specs, nor any comments about the latter allowing for runtime compression to those formats. The fact that desktop drivers do allow for automatic mipmap generation and runtime recompression of DXTC textures is outside of the spec. It should also take more time to generate those mips and recompress than to just load precomputed mipmaps, unless you’re on a slow network connection. So plenty of devices out there can accurately say they have support for both extensions, but will throw an error if you try to tell them to generate mipmaps for a compressed texture.

This will include all mobile hardware, since none of them have the computation power to encode to DXTC formats in real time. (Well, maybe the latest Apple hardware, but they, or really most mobile GPUs, don’t support DXTC formats.)

For the Quest you should be using ASTC or ETC1 textures. ETC1 textures are small (6:1 compression vs a 24-bit RGB source), but have more obvious compression artifacts. ASTC has variable block sizes, with ASTC 6x6 looking better and having an overall higher compression ratio than ETC1 (roughly 6.75:1 for 24-bit RGB, nearly 9:1 for 32-bit RGBA!), but it may potentially be slightly slower to render due to the additional cache usage.
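Those ratios fall out of the fixed block sizes; a quick sanity check (plain arithmetic, not Unity-specific):

```csharp
// ETC1: 64-bit blocks of 4x4 pixels; ASTC 6x6: 128-bit blocks of 6x6 pixels
float etc1Bpp  = 64f  / (4 * 4);  // 4 bits per pixel
float astc6Bpp = 128f / (6 * 6);  // ~3.56 bits per pixel

Debug.Log(24f / etc1Bpp);   // 6    -> 6:1 vs 24-bit RGB
Debug.Log(24f / astc6Bpp);  // 6.75 -> 6.75:1 vs 24-bit RGB
Debug.Log(32f / astc6Bpp);  // 9    -> 9:1 vs 32-bit RGBA
```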

Honestly, what you probably want to look at is something like Basis. That’s a single highly compressed image format that directly and quickly decompresses into just about any GPU compression format. It’s less than half the size of an ETC1 texture and is built to decompress very fast, meaning you can load it from disk with mips and get an ETC1 or DXT1 texture to the GPU faster than you can load an ETC1 or DXT1 texture without mips.
https://github.com/BinomialLLC/basis_universal


I will not argue about the extensions with you, even if I think you are wrong. But the rest was really good info. DXT1 has a 6:1 compression ratio, so if I get you right, Basis is around 12:1.

In my case I don’t load textures from disk like files but streams from servers in binary compressed formats so the amount of data is really important. As well as the quality of course.

Technically Basis isn’t a fixed compression ratio like GPU compression formats. It’s more like jpg in that it depends on the content, but it’s essentially guaranteed to be smaller than any GPU format you’d be transcoding into.