I have lots of models that share a single atlased texture. I'm interested in using texture streaming, but I'm wondering whether Unity will be able to load only the parts of the image that are near the player? Or will I need to use individual textures for each object to get texture streaming to work correctly?
Unity will load just the reduced mips if all of the objects using the atlased texture can render at dropped mip levels. As soon as one object needs full resolution, the mip streaming system will load the whole texture; it doesn't split the texture into subsections.
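You can watch this behaviour at runtime through the Mipmap Streaming API. Here's a minimal sketch; the `atlas` field is a placeholder for your shared atlas, and it assumes Texture Streaming is enabled in Quality settings and "Streaming Mipmaps" is ticked on the texture's importer:

```csharp
using UnityEngine;

// Minimal sketch: observe what the mip streaming system loads for a shared atlas.
public class AtlasMipWatcher : MonoBehaviour
{
    [SerializeField] Texture2D atlas; // assign your shared atlas in the Inspector

    void Update()
    {
        if (atlas == null || !atlas.streamingMipmaps)
            return;

        // desiredMipmapLevel is the mip the streaming system wants to load
        // (before the memory budget is applied); loadedMipmapLevel is what is
        // currently resident. One close-up object drags the whole atlas to
        // full resolution for every object sharing it.
        Debug.Log($"desired mip: {atlas.desiredMipmapLevel}, " +
                  $"loaded mip: {atlas.loadedMipmapLevel}");
    }
}
```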
The virtual texturing system does, however, work at the sub-texture level, so it could give you the support you need, depending on which platform you are running on.
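For reference, Streaming Virtual Texturing is HDRP-only, must be enabled in Player settings, and the texture has to be sampled through a Texture Stack in Shader Graph. A rough sketch of pre-requesting a region of the atlas (the stack name and rect here are assumptions, not real project values):

```csharp
using UnityEngine;
using UnityEngine.Rendering.VirtualTexturing;

// Rough SVT sketch: unlike mip streaming, SVT loads individual tiles, so only
// the regions of the atlas that are actually sampled become resident.
public class PrefetchAtlasRegion : MonoBehaviour
{
    [SerializeField] Material material; // material sampling through a texture stack
    static readonly int StackId = Shader.PropertyToID("MyStack"); // assumed stack name

    void Start()
    {
        // Pre-request a quarter of the atlas (normalized rect) at mip 0 so the
        // tiles are resident before the camera gets close.
        Streaming.RequestRegion(material, StackId, new Rect(0f, 0f, 0.25f, 0.25f), 0, 1);
    }
}
```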
Why not use Texture Arrays instead of an atlas? The cost is basically the same, you still get mips, and it's a single GPU resource just like an atlas would be; see the packing sketch below.
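A hedged sketch of packing individual textures into a `Texture2DArray`; all slices must share the same size, format, and mip count, and `sources` is a placeholder for your per-object textures:

```csharp
using UnityEngine;

public static class TextureArrayBuilder
{
    public static Texture2DArray Build(Texture2D[] sources)
    {
        var first = sources[0];
        var array = new Texture2DArray(first.width, first.height, sources.Length,
                                       first.format, mipChain: true);

        // GPU-side copy of every mip of each source texture into its slice.
        for (int slice = 0; slice < sources.Length; slice++)
            for (int mip = 0; mip < first.mipmapCount; mip++)
                Graphics.CopyTexture(sources[slice], 0, mip, array, slice, mip);

        return array;
    }
}
```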
I'm not sure texture streaming works with arrays, but if it does, it should have the same limitations as an atlas.
When using an array, it's the shader that decides, on the GPU, which slice to use for each pixel. Meanwhile, texture streaming runs on the CPU, using a limited set of precalculated heuristics based on mesh UVs to estimate a single texture-density value for each renderer from its size on screen.
Therefore, there's no way for the streaming system to know which slices an object is actually going to display, and it would have to load the necessary mips for all slices. It also cannot take into account any shader-calculated UVs that result in a different density than the one in the mesh UVs (triplanar mapping, for example).
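To illustrate why the slice choice is invisible to the CPU: it typically only exists as a per-renderer material property consumed by the shader. In this sketch, `_Slice` is an assumed shader property name, not a built-in:

```csharp
using UnityEngine;

public class SliceSelector : MonoBehaviour
{
    [SerializeField] int slice; // which array slice this object samples

    void Start()
    {
        var block = new MaterialPropertyBlock();
        block.SetFloat("_Slice", slice);
        // The CPU streaming system never evaluates the shader, so it can't
        // know this renderer only touches one slice of the array.
        GetComponent<Renderer>().SetPropertyBlock(block);
    }
}
```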
Hi, can you please give me some ideas on how you're implementing texture streaming for mobile? And what would be efficient values for these parameters, performance-wise?
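Assuming "these parameters" means the Mipmap Streaming quality settings, here's a sketch of setting them at startup. The numbers are illustrative starting points for a mobile budget, not measured recommendations; tune them against your own profiler data:

```csharp
using UnityEngine;

public class MobileStreamingConfig : MonoBehaviour
{
    void Awake()
    {
        QualitySettings.streamingMipmapsActive = true;
        QualitySettings.streamingMipmapsMemoryBudget = 128f;      // MB of streamed texture memory (Unity's default is 512)
        QualitySettings.streamingMipmapsMaxLevelReduction = 2;    // max top mips that can be dropped per texture
        QualitySettings.streamingMipmapsRenderersPerFrame = 512;  // spread mip-recalculation cost across frames
        QualitySettings.streamingMipmapsAddAllCameras = false;    // only stream for cameras with a StreamingController
    }
}
```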