texture compression and memory profiler... oddities

I switched about 200MB of textures to Crunch compression (Normal, 50% quality), and the Inspector shows a reduction of about 80%.
I took a few Memory Profiler captures from a build. The textures are still 16MB there, whereas the Inspector shows them as 1.9MB.
What… uh… what’s going on?

Crunch compression only reduces the size on disk (build/download size). In video memory, crunched textures are decompressed back to regular DXT.
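
If you want to sanity-check that on your own content, something like this rough sketch (the class name is just a placeholder, not a Unity API) will log each loaded texture's runtime format and approximate footprint in the Editor or a Development Build. Crunched assets should report a ...Crunched format but still take the full DXT-sized footprint:

```csharp
using UnityEngine;
using UnityEngine.Profiling;

public class TextureMemoryLogger : MonoBehaviour // hypothetical helper name
{
    void Start()
    {
        // Log every loaded texture's runtime format and memory footprint.
        foreach (var tex in Resources.FindObjectsOfTypeAll<Texture2D>())
        {
            long bytes = Profiler.GetRuntimeMemorySizeLong(tex);
            Debug.Log($"{tex.name}: {tex.width}x{tex.height} {tex.format} ~{bytes / (1024f * 1024f):F2} MB");
        }
    }
}
```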

2 Likes

Thanks, good to know.
So does everything get decompressed on launch, or just as needed?

BTW, this table might help you choose a fitting compression format based on the platform and the factor you're trying to optimize for.

1 Like

Yeah, I wish the docs were better for that purpose; they're just too vague and in the wrong format.
Terms like “medium” or “low” don't belong in a spreadsheet about quantities; it should use numbers, such as a perceptual loss percentage.
Also, “variable” as the bitrate for Crunch gives zero information; it should be the bitrate range on disk, in RAM, and in VRAM.

Fair. I'm not sure how mathematically viable a range would be for the variable bitrate, as I'm not firm enough in the algorithms. As for perceived quality loss, I'm not sure there are objective, universally accepted, quantifiable measurements that would apply across all of them either. I have to defer that to the graphics people.

I can only say that the size resulting from a variable-bitrate compression will ultimately show up in the Memory Profiler, via a snapshot taken from the target device.
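
For reference, a snapshot can also be triggered from script on the device; here's a rough sketch, assuming the experimental API location below (UnityEngine.Profiling.Memory.Experimental) matches your Unity version — newer versions expose the same call under Unity.Profiling.Memory, and the class name is just a placeholder:

```csharp
using System.IO;
using UnityEngine;
using UnityEngine.Profiling.Memory.Experimental; // assumption: older experimental namespace

public class SnapshotOnDevice : MonoBehaviour // hypothetical helper name
{
    // Hook this up to a debug key/button in the build, then pull the .snap
    // file off the device and open it in the Memory Profiler window.
    public void Capture()
    {
        string path = Path.Combine(Application.persistentDataPath, "device.snap");
        MemoryProfiler.TakeSnapshot(path, (resultPath, success) =>
            Debug.Log(success ? $"Snapshot written to {resultPath}" : "Snapshot failed"));
    }
}
```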

1 Like

It’s listed as “variable” because it depends on the content and the compression quality, and can’t really be known until after the texture has been crunch compressed. You could have crunched textures come in close to 80% of the original un-crunched size (for something like random noise at max quality), or under 5% of the original file size (for a single-color texture at min quality), and anything in between.
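
To put numbers on the part that isn't variable: the in-memory size is fixed by resolution and DXT format alone; only the on-disk size moves. A rough sketch of the arithmetic (the helper name is made up):

```csharp
using UnityEngine;

public static class DxtFootprint // hypothetical helper, just for the arithmetic
{
    // Block-compressed GPU footprint is fixed by format and resolution:
    // DXT1 = 4 bits per pixel, DXT5 = 8 bits per pixel, regardless of content
    // and regardless of whether the asset was crunched on disk.
    // (Ignores mipmaps, which add roughly another third on top.)
    public static long BaseLevelBytes(int width, int height, bool dxt1)
    {
        int bitsPerPixel = dxt1 ? 4 : 8;
        return (long)width * height * bitsPerPixel / 8;
    }
}

// Example: BaseLevelBytes(2048, 2048, dxt1: false) = 4,194,304 bytes = 4 MB in VRAM,
// whether the crunched file on disk came out at 0.2 MB or 3 MB.
```

For what it's worth, a single 4096×4096 DXT5 texture without mips lands at exactly 16MB, which is the kind of flat number you'd see in a snapshot no matter where the crunch quality slider sits.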

2 Likes

No idea either. Histogram delta is how I do it in Photoshop when a spreadsheet must be filled and time can be burnt. @bgolus has a good point: various types of images respond differently to compression, so it might need a few ranges; “variable” could be replaced by something like “80% @ 10% loss on noise”. Then, yeah, the doc would need to describe how loss is measured and include the reference images so artists can eyeball the corresponding type. Definitely not an easy task, and it would probably end up looking like an academic paper. I don’t see that happening :smile:

Cool. Uh, just to make sure: ultimately = eventually?

But that’s still kind of meaningless. The best way to know how well crunch works is to try it and play with the slider. What amount of loss is acceptable is subjective, and the usual metrics academics use (PSNR, peak signal-to-noise ratio, and RMSE, root mean squared error) are well known for not being great at actually predicting how “bad” something looks to a human. A good example: there are a handful of deep-learning image compression techniques that are terrible by a pure RMSE / PSNR metric, but look amazing subjectively, because they capture or recreate the elements of an image that human vision is actually good at noticing, better than a lot of older compression techniques that focus on perfect reproduction of the source image.
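
If you do want a number anyway, those metrics are only a few lines to compute yourself. A rough sketch (the helper name is made up; it assumes both textures are the same size, Read/Write enabled, and in a format Unity can read pixels back from, so it's easiest on uncompressed comparison copies):

```csharp
using UnityEngine;

public static class ImageMetrics // hypothetical helper, just to show the math
{
    // RMSE over RGB, and PSNR = 20 * log10(MAX / RMSE) with MAX = 1 for
    // normalized color values. These are exactly the metrics that are easy
    // to compute but don't track perceived quality particularly well.
    public static (float rmse, float psnr) Compare(Texture2D a, Texture2D b)
    {
        Color[] pa = a.GetPixels(); // requires Read/Write enabled
        Color[] pb = b.GetPixels();
        double sumSq = 0;
        for (int i = 0; i < pa.Length; i++)
        {
            sumSq += (pa[i].r - pb[i].r) * (pa[i].r - pb[i].r)
                   + (pa[i].g - pb[i].g) * (pa[i].g - pb[i].g)
                   + (pa[i].b - pb[i].b) * (pa[i].b - pb[i].b);
        }
        float rmse = Mathf.Sqrt((float)(sumSq / (pa.Length * 3)));
        float psnr = 20f * Mathf.Log10(1f / Mathf.Max(rmse, 1e-8f));
        return (rmse, psnr);
    }
}
```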

And there are plenty of examples online of the difference between a “reference” DXT1 image and the crunched version … and generally you can’t tell them apart. Whether or not that’ll be the case for your own images depends on … your own images.

2 Likes

The size already shows up in there. It’s just missing some additional meta info, and whether it’s in RAM, VRAM, or both (relevant for Read/Write enabled textures).
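
In the meantime, a quick way to spot the textures that pay for both copies is to check isReadable at runtime; a rough sketch (class name is made up):

```csharp
using UnityEngine;

public class ReadableTextureAudit : MonoBehaviour // hypothetical helper name
{
    // Lists textures that keep a CPU-side copy in addition to the GPU copy.
    // A texture with Read/Write enabled in its import settings reports
    // isReadable == true and effectively costs its size twice (RAM + VRAM).
    void Start()
    {
        foreach (var tex in Resources.FindObjectsOfTypeAll<Texture2D>())
        {
            if (tex.isReadable)
                Debug.Log($"{tex.name} is Read/Write enabled -> CPU copy kept in RAM as well as VRAM");
        }
    }
}
```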

1 Like