Hi! Could someone share some info on what benefits texture atlasing would offer to GPU RD for different meshes, with different mats, but the same shader? Where those mats would be sharing the same textures via atlas or texture array. Thanks!
bumpy bump bump. Anyone have any information on this situation? I can’t imagine it’s that rare. Does GPURD just benefit from the memory locality of atlases / reduced texture fetching, or is there anything else?
GPU Resident Drawer simply makes use of instanced mesh drawing. You send the mesh to the GPU once instead of in multiple batches (“draw calls”), which cuts the cost of your CPU preparing and submitting all that data to the GPU.
" The GPU Resident Drawer automatically uses the BatchRendererGroup API to draw GameObjects with GPU instancing, which reduces the number of draw calls and frees CPU processing time. For more information, refer to How BatchRendererGroup works."
(source: Use the GPU Resident Drawer | High Definition RP | 17.0.3)
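For context, here is a minimal sketch of what instanced drawing looks like when done by hand with `Graphics.RenderMeshInstanced` (the GPU Resident Drawer does something similar for you through BatchRendererGroup; the component and field names here are just illustrative, and the material needs “Enable GPU Instancing” checked):

```csharp
using UnityEngine;

public class InstancedExample : MonoBehaviour
{
    public Mesh mesh;          // one mesh, uploaded once
    public Material material;  // must have "Enable GPU Instancing" on

    Matrix4x4[] instances;

    void Start()
    {
        // 1023 is the per-call instance limit for this API.
        instances = new Matrix4x4[1023];
        for (int i = 0; i < instances.Length; i++)
            instances[i] = Matrix4x4.Translate(Random.insideUnitSphere * 50f);
    }

    void Update()
    {
        // One draw call submits every instance, instead of one call per object.
        var rp = new RenderParams(material);
        Graphics.RenderMeshInstanced(rp, mesh, 0, instances);
    }
}
```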
Unity also has dynamic batching, but it only works on rather small meshes and won’t work when you have different materials on them. (source: Unity - Manual: Dynamic batching)
So in that case, one material for multiple objects, sharing a texture atlas would help, to a degree.
For the GPU Resident Drawer, I am still experimenting so I can’t say for sure if a texture atlas would help, or how they actually batch that together…
Hey! Well if they’re different meshes that share the same shader/mat/texture they wouldn’t batch together afaik, since it’s a different mesh… But I think there would still be a net benefit across those different meshes because their shared textures are in the same spot in memory. I’m just wondering how that works, or if it’s really worth caring about VS separate textures with their own individual texture fetches.
yes they would batch if the meshes are different but the material is the same. It works well with static batching, and it will also work with dynamic batching, but only for small meshes (the limit is 900 vertex attributes, i.e. roughly 300 vertices if the shader uses position, normal and one UV). A use-case would be the particle system: if you choose to emit different meshes, they get transformed and combined into a single “virtual” mesh and sent to the GPU in one go instead of one-by-one.
That may not actually be the case (not 100% sure). If you read about “bindless rendering”, it’s all about using “pointers” (descriptor handles) instead of binding resources to slots every time. So afaik, unless we use bindless rendering, different materials still have to bind/unbind their textures individually each time. Bindless rendering would solve that by working around the per-draw binding and profiting from the textures already being in VRAM anyway.
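As a rough illustration of the difference, in D3D12/Vulkan-style HLSL (this is a sketch of the general descriptor-indexing technique, not how Unity implements anything internally; `_AllTextures` and `materialIndex` are made-up names):

```hlsl
// Classic slot binding: changing material means the CPU rebinds t0.
Texture2D _BaseMap : register(t0);

// Bindless / descriptor indexing: one unbounded texture array is bound
// once, and each draw just supplies an index into it.
Texture2D    _AllTextures[] : register(t0, space1);
SamplerState _Sampler       : register(s0);

float4 SampleBindless(uint materialIndex, float2 uv)
{
    // NonUniformResourceIndex is required when the index can vary
    // across a draw (e.g. per instance).
    return _AllTextures[NonUniformResourceIndex(materialIndex)]
               .Sample(_Sampler, uv);
}
```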
if by “memory” you mean cpu ram, well then it might be a slight advantage to use atlases instead of individual textures, I guess.
If you have the time, create a test case for comparison, look at the frame debugger, see if it batches, look at the rendergraph/timeline and see how that is affected. Measure the cpu and gpu time. I’d be glad to see the results.
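For the measuring part, a small sketch using Unity’s `ProfilerRecorder` API can log the relevant render counters each frame (the counter names “Draw Calls Count” and “SetPass Calls Count” come from Unity’s profiler statistics; check they exist on your Unity version):

```csharp
using Unity.Profiling;
using UnityEngine;

public class BatchStats : MonoBehaviour
{
    ProfilerRecorder drawCalls;
    ProfilerRecorder setPassCalls;

    void OnEnable()
    {
        drawCalls    = ProfilerRecorder.StartNew(ProfilerCategory.Render, "Draw Calls Count");
        setPassCalls = ProfilerRecorder.StartNew(ProfilerCategory.Render, "SetPass Calls Count");
    }

    void OnDisable()
    {
        drawCalls.Dispose();
        setPassCalls.Dispose();
    }

    void Update()
    {
        // Compare these numbers between the atlas and non-atlas test scenes.
        Debug.Log($"Draw calls: {drawCalls.LastValue}, SetPass calls: {setPassCalls.LastValue}");
    }
}
```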
right, I’m not able to use static or dynamic batching, I’m specifically asking about the Resident Drawer.
Thanks, I’ll read about bindless rendering.
ah I see, I can’t tell you exactly. Maybe ask in this thread? GPU Driven Rendering In Unity