Does built-in Mipmap Streaming stream from memory rather than from storage?

I implemented a custom solution for texture streaming, so this question is just out of curiosity.

If my understanding is correct, Unity uses a common asset system for all assets: scene assets, Resources, and AssetBundles.
They are loaded into memory first, and Unity uses their contents from there.

Does that mean Unity streams textures out of the bundled asset that is already in memory?
And if it does, when you say "memory budget" in mipmap-streaming topics, does it actually mean VRAM (when a GPU with dedicated VRAM is used, I mean)?
I could not find this information in your documentation.

Thank you.

The texture mip streaming memory budget is related to GPU memory usage.

The texture data is actually streamed from disk on demand.
When loading an asset bundle Unity initially only loads the mapping table. This mapping table indicates where in the file the resource is located (offset and size).
When a specific texture resource is requested, Unity loads the (subset of) mip data from disk into GPU memory. This goes directly to GPU memory on some platforms, and via CPU memory on platforms that do not support the optimal path.

So mip streaming can improve both GPU memory usage and loading times from disk by reducing the volume of texture data (the number of mips) that is loaded through the pipeline.
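For anyone wanting to experiment with the behavior described above, here is a minimal sketch of the runtime knobs involved. The memory budget is expressed in megabytes and, per the explanation above, applies to GPU texture memory. The class name is illustrative; the `QualitySettings` properties are real Unity APIs:

```csharp
using UnityEngine;

// Sketch: inspecting/adjusting mipmap streaming at runtime.
// Requires "Texture Streaming" enabled in Quality Settings and
// textures imported with "Streaming Mipmaps" checked.
public class MipStreamingSetup : MonoBehaviour
{
    void Start()
    {
        // GPU-side budget for streamed mips, in megabytes.
        QualitySettings.streamingMipmapsMemoryBudget = 512f;

        // Whether the texture streaming system is active.
        Debug.Log($"Streaming active: {QualitySettings.streamingMipmapsActive}");
    }
}
```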


Thank you for the detail! I mean, really!!
I feel like I did not need to implement streaming myself; it was only for precise memory management and possibly faster loading times.

But it begs the question: when does the "only loads the mapping table" part happen?
I assume in AssetBundle.LoadFromFile(Async)?
I mean, we have other options for creating an AssetBundle, like AssetBundle.LoadFromMemory(Async), but it seems that one cannot be used with streaming from disk, because all the bytes have to be in memory beforehand.

If AssetBundle.LoadFromFile does many good things compared to the other methods, that is interesting, and the detail should be in the documentation, I think :)

But anyway, thank you very much! I am really glad to get to grips with Unity's internals here.

Oh dear, sorry, I found the information.
This document has the answer. Basically, LoadFromFile does the "better" thing I asked about in my previous comment.
I'm sorry for bothering you so much! Thanks!


Sorry for reviving this thread.
Does direct loading from disk to the GPU work with LZ4 compression?

I still don't understand: does streaming save RAM, or only VRAM? On desktop, I mean.

If direct loading from disk to the GPU works, then RAM is also saved. But whether that path is available or not is very unclear.

The main purpose of streaming in Unity seems to be saving VRAM.
If direct loading does not work (I do not know Unity's implementation details), then transient RAM pressure is inevitable. So how much DRAM is saved depends on how Unity handles this.

I want Unity to clarify what happens there more precisely in their documentation.

My understanding is that there is no (or very little) RAM cost for textures unless you make the texture readable on the CPU. There is a checkbox for that in the import settings.

Even without streaming, there is no RAM cost. There is only a temporary copy in RAM (a staging buffer) on systems that don't support uploading directly to VRAM while the texture is being loaded.
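To illustrate the point above (a sketch, not from the post itself): whether a texture keeps a persistent CPU-side copy can be checked at runtime via `Texture2D.isReadable`, which mirrors the Read/Write Enabled checkbox in the import settings. The class name and field are illustrative:

```csharp
using UnityEngine;

// Sketch: checking whether a texture keeps a CPU-side copy.
public class ReadableCheck : MonoBehaviour
{
    public Texture2D tex; // assign in the Inspector

    void Start()
    {
        if (tex != null && tex.isReadable)
            Debug.Log($"{tex.name} keeps a CPU copy (persistent RAM cost).");
        else
            Debug.Log("GPU-only: no persistent RAM cost for the pixel data.");
    }
}
```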


[quote="nishikinohojo", post:7, topic:890671]
But whether it is available or not is very unclear.
[/quote]

Should be available on consoles because of unified memory, but not on PC. However, things are changing with DirectStorage, Resizable BAR, and M1-based Macs. Also not sure about smartphones. So yeah, agreed, this could be better documented.


[quote="c0d3_m0nk3y", post:9, topic:890671]
Should be available on consoles because of unified memory but not on PC. However, things are changing with direct storage and M1 based macs. Also not sure about smartphones. So yeah, agree, could be better documented.
[/quote]

I did not think the "optimal path" was something that comes just from shared memory, but maybe it is.
I thought lyndon_unity was talking about the possibility of DMA.
If DMA never happens, the wording is very misleading; I mean, on a unified-memory platform, the path always has to be "optimal"…

I just tested a large scene in the editor, and my RAM usage drops and rises by 5 GB as I toggle the checkbox. So it's safe to say it does work well on desktop.

I only wish the shading debug mode in the Scene view would actually work and not just make everything invisible.
UPD: HDRP uses a separate tool for debugging: Window > Analysis > Rendering Debugger > Rendering > Mip Maps.


Hi all, mip streaming uses the async upload pipeline. This pipeline allocates a fixed-size CPU-side buffer (which is customizable) and uses it as a temporary staging buffer before ultimately uploading the texture data to the GPU. If a texture is larger than the fixed-size buffer, the buffer is reallocated to accommodate the entire texture. Other than this shared buffer, there are no CPU allocations that store the texture data; it is only stored on the GPU. The exception is if the texture is marked Read/Write Enabled, in which case the texture is also stored in CPU memory so that it can be easily accessed.
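A quick sketch of the async-upload-pipeline knobs described above; these control the CPU-side staging buffer that texture data passes through on its way to the GPU. The `QualitySettings` properties are real Unity APIs; the class name and the specific values are illustrative:

```csharp
using UnityEngine;

// Sketch: tuning the async upload pipeline's staging buffer.
public class AsyncUploadTuning : MonoBehaviour
{
    void Start()
    {
        // Size of the staging buffer in megabytes; Unity reallocates it
        // if a single texture exceeds this size.
        QualitySettings.asyncUploadBufferSize = 16;

        // Milliseconds of CPU time per frame spent performing uploads.
        QualitySettings.asyncUploadTimeSlice = 2;

        // Keep the buffer allocated instead of freeing it when idle.
        QualitySettings.asyncUploadPersistentBuffer = true;
    }
}
```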

This blog on the Async Upload Manager has a lot of information on how this upload process works: https://blog.unity.com/technology/optimizing-loading-performance-understanding-the-async-upload-pipeline

If you are concerned about CPU memory usage, you should avoid using AssetBundle.LoadFromMemory, which keeps the entire AssetBundle binary file loaded in memory throughout the lifetime of the AssetBundle. AssetBundle.LoadFromFile is lightweight in that it opens the AssetBundle file and accesses the texture mipmap data as needed.
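To make the contrast concrete, here is a sketch of the two loading paths (the bundle file name and class name are hypothetical): `LoadFromFile` keeps only the bundle's header/mapping data resident and reads texture bytes from disk on demand, while `LoadFromMemory` requires the whole bundle's bytes to live in managed memory first.

```csharp
using System.IO;
using UnityEngine;

// Sketch: preferred vs. memory-heavy AssetBundle loading paths.
public class BundleLoadExample : MonoBehaviour
{
    void Start()
    {
        // Hypothetical bundle path for illustration.
        string path = Path.Combine(Application.streamingAssetsPath, "textures.bundle");

        // Preferred: streams from disk, low RAM overhead.
        AssetBundle bundle = AssetBundle.LoadFromFile(path);

        // Avoid for large bundles: the full byte[] stays in memory
        // for the bundle's entire lifetime.
        // byte[] bytes = File.ReadAllBytes(path);
        // AssetBundle bundle2 = AssetBundle.LoadFromMemory(bytes);
    }
}
```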