Marking textures as non-readable without calling Apply()

Morning,

I have a problem: I merge some meshes together at runtime and combine their materials using a custom shader that accepts Texture2DArrays. I create these arrays manually (with new) and efficiently Blit/CopyTexture the contents across on the GPU.
Since they're created dynamically they're CPU-readable, but I don't need them to be. Calling Apply(false, true) on the texture array to achieve this, however:

a) wipes out the results of my efficient Blits, replacing the contents with white pixels
b) takes 100+ms

Is there a way I could mark them non-readable without the above side effects? I'd like to release the CPU-side memory, since I won't modify the arrays beyond the initial setup.
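
Roughly what the setup looks like, as a simplified sketch (names and formats are placeholders; the final Apply call is the part causing the problems):

```csharp
using UnityEngine;

public static class TextureArrayMerger
{
    // Simplified sketch of the setup described above: build a Texture2DArray
    // from same-sized, same-format source textures entirely on the GPU.
    public static Texture2DArray Build(Texture2D[] sources)
    {
        var first = sources[0];
        var array = new Texture2DArray(first.width, first.height, sources.Length,
                                       first.format, true); // true = allocate mip chain

        // GPU-side copies only, all mips per slice, no readback.
        for (int slice = 0; slice < sources.Length; slice++)
            Graphics.CopyTexture(sources[slice], 0, array, slice);

        // The problematic step: the only way I know to mark the array
        // non-readable, but it re-uploads the untouched CPU-side data over
        // my GPU copies (white pixels) and stalls for 100+ ms.
        array.Apply(false, true); // updateMipmaps: false, makeNoLongerReadable: true

        return array;
    }
}
```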

Call Apply() immediately after creating it? It probably won't avoid the 100+ ms hitch, but it won't overwrite the data.

If you’re dealing with uncompressed texture formats, you could create a render texture 2D array (a RenderTexture with its dimension set to Tex2DArray), which never uses any memory on the CPU side.

If you’re on Unity 2020.2 you could create a dummy Texture2DArray as a “blank” (all black) asset with the resolution, format, and enough layers to cover your common cases.
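
Something like this, i.e. CopyTexture into the pre-authored blank at runtime (a rough sketch; the asset path is made up, and the blank has to match the runtime resolution/format and have enough layers):

```csharp
using UnityEngine;

public static class BlankArrayExample
{
    // Rough sketch of the "blank asset" idea: the Texture2DArray asset is
    // authored once in the Editor with Read/Write disabled, so it never
    // carries a CPU-side copy, and the slices are filled at runtime.
    public static Texture2DArray FillBlank(Texture2D[] sources)
    {
        // Made-up path/name; the blank must match width/height/format and
        // have at least sources.Length layers.
        var blank = Resources.Load<Texture2DArray>("BlankArray_1024_DXT5_8");

        for (int slice = 0; slice < sources.Length; slice++)
            Graphics.CopyTexture(sources[slice], 0, blank, slice); // GPU-only copy

        return blank;
    }
}
```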


I was thinking of applying first, but this code eventually has to run in VR, so unfortunately I can’t afford a stall that long. Would it be possible to expose the appropriate functionality to users in future releases?

I noticed in Unity’s source on GitHub that Apply currently just calls ApplyImpl internally, so I can’t even hack my way in through reflection as a temporary solution.

The size of the arrays I create is unpredictable (it depends on the user’s decisions), so a universal template is out of the question too ;/

For context, the attached screenshot shows that using Blit/CopyTexture currently lets me generate three 7-slice, fully mipped arrays in the span of 10 frames at barely any visible cost to the user, both in the Editor and on mobile, which is impressive and a great performance win for the app. We can afford the memory jump for now, but I’d love to see the last step possible too :)

[Attached screenshot: Screenshot 2021-03-18 162305.png]

If all you need is a “GPU only” texture array, would using a RenderTexture with .dimension set to Tex2DArray work? That would not incur any system-memory copy or upload at all.
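
For example, something along these lines (just a sketch; format, mip settings and slice count are whatever your merge needs, and the property name is a placeholder):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public static class GpuOnlyArrayExample
{
    // Sketch: a 2D-array RenderTexture lives purely on the GPU, so there is
    // no CPU copy to release and no Apply() call needed at all.
    public static RenderTexture Create(int width, int height, int slices)
    {
        var desc = new RenderTextureDescriptor(width, height, RenderTextureFormat.ARGB32, 0)
        {
            dimension = TextureDimension.Tex2DArray,
            volumeDepth = slices,
            useMipMap = true,
            autoGenerateMips = false // mips get filled by CopyTexture/Blit instead
        };

        var rt = new RenderTexture(desc);
        rt.Create();

        // Fill the slices with Graphics.CopyTexture / Graphics.Blit as before,
        // then bind it like any other texture: material.SetTexture("_AlbedoArray", rt);
        return rt;
    }
}
```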


That worked! Is there any penalty for using RTs vs “pure” textures?
Also, for future reference: I had to add an intermediate CopyTexture into a workspace buffer, since going RT -> RT directly doesn’t work and results in white pixels; I can’t see any perf hit really.
Just gotta clean it up and match colour spaces, but I’m happy with the results already.
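
In case it’s useful to anyone, the workaround looks roughly like this (a sketch; names are placeholders):

```csharp
using UnityEngine;

public static class ArrayBlitHelper
{
    // Sketch of the workaround mentioned above: blitting straight into a slice
    // of the array RT gave me white pixels, so I Blit into a temporary 2D
    // "workspace" RT first and CopyTexture that into the target slice.
    public static void BlitIntoSlice(Texture source, RenderTexture targetArray, int slice, int mip)
    {
        int w = targetArray.width >> mip;
        int h = targetArray.height >> mip;

        // The workspace format has to match the array RT for CopyTexture.
        var workspace = RenderTexture.GetTemporary(w, h, 0, targetArray.format);

        Graphics.Blit(source, workspace);                               // shader-based copy/convert
        Graphics.CopyTexture(workspace, 0, 0, targetArray, slice, mip); // raw GPU copy into the slice

        RenderTexture.ReleaseTemporary(workspace);
    }
}
```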

Thanks!

No; if you only need the GPU-side data, then RenderTextures are exactly the right fit for that case.

Hi, is it possible to keep a texture GPU-only but compressed?

I guess you should be able to, I think, as long as you can match your format across the chain of CopyTexture calls?
I don’t bother with that because I have a mix of RGB and RGBA textures I have to handle seamlessly.


Update:
I’m having trouble with some of the texture Blits I’m doing between the original textures and my target render texture.
Are there limitations on Graphics.Blit’s compatibility?
I’ve managed to narrow this down to an ETC2 4-bit compressed normal map texture not rendering its data properly through a Graphics.Blit; simply switching the compression to ETC 4-bit seems to fix the issue.
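
For anyone hitting something similar, a small check like this (purely a sketch; names are placeholders) makes it easy to log which source/destination format pairs a given Blit involves:

```csharp
using UnityEngine;
using UnityEngine.Experimental.Rendering;

public static class BlitDebug
{
    // Diagnostic sketch: log the source and destination formats before a
    // Blit, and flag compressed sources, to narrow down problem pairs.
    public static void LogBlitPair(Texture source, RenderTexture dest)
    {
        bool srcCompressed = GraphicsFormatUtility.IsCompressedFormat(source.graphicsFormat);
        Debug.Log($"Blit {source.name}: {source.graphicsFormat} " +
                  $"(compressed: {srcCompressed}) -> {dest.graphicsFormat}");
    }
}
```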


Update 2:
Blitting normals into an RT of type

GraphicsFormat.R8G8B8A8_SRGB

and rewriting the normal unpack code in my shader to manually select between the #if and #else cases, depending on the platform and the source of the assets (AssetBundle vs AssetDatabase), seems to have worked.
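
For completeness, a rough sketch of what that selection looks like from the C# side (the keyword name is made up and just stands in for my manual #if/#else switch in the shader):

```csharp
using UnityEngine;
using UnityEngine.Experimental.Rendering;

public static class NormalUnpackSetup
{
    // The target array RT uses an uncompressed sRGB format, and the merged
    // material is told which unpack branch to take per platform/asset source.
    public static readonly GraphicsFormat NormalArrayFormat = GraphicsFormat.R8G8B8A8_SRGB;

    public static void ConfigureUnpack(Material mergedMaterial, bool normalsAreAgPacked)
    {
        // "_UNPACK_NORMAL_RGB" is a hypothetical keyword name.
        if (normalsAreAgPacked)
            mergedMaterial.DisableKeyword("_UNPACK_NORMAL_RGB"); // DXT5nm/AG-style sources
        else
            mergedMaterial.EnableKeyword("_UNPACK_NORMAL_RGB");  // plain xyz * 2 - 1 sources
    }
}
```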