May someone please tell me, according to your experience with 3D models, which scenario consumes more CPU power?
1 - Having one big model of the whole level, say, 200 vertices total (yes, just one model, nothing under it in the hierarchy or above it)
2 - Having my level as separate models, all in one prefab combining the whole level (e.g. stairs: 50 vertices, walls: 50 vertices, floor/ceiling: 100 vertices, all adding up to 200 vertices)
Thanks in advance
My aim with this question is to find out whether the number of models in the scene affects speed/computing resources.
:lol:
It’ll add a ton more draw calls having everything split up. It depends: for a small building like that it should be all one model, but if it’s a huge map, you’ll need to split certain things up.
It’s a far more nuanced question than you might think.
Going purely off the details you’ve given, the latter is going to be slower in this case, due to the extra draw calls and the Unity housekeeping overhead of dealing with more GameObjects (e.g. frustum culling checks). However, any additional effort on the part of the CPU is going to be negligible, as you are dealing with so few objects and the actual vertex count is tiny.
This only applies if the level and those objects all use the same material (and ignoring any dynamic batching Unity might do). If those models have different textures, and therefore different materials, then there is likely to be even less difference: the single model would itself be split into multiple sub-meshes, and (I’m assuming) Unity would still need a separate draw call for each sub-mesh, since the differing materials require a state change between each call.
Your more generalised question would be whether it’s better to have a single mesh vs. multiple meshes, but here again other factors come into play and it’s a question of balance.
For example, merging models/meshes that use the same material will reduce draw calls, but conversely could result in far more work being done by the GPU, as it might be processing geometry that never gets rendered (a merged mesh is culled as one big bounding volume, so even the parts that are off-screen get submitted).
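As a rough illustration, this kind of pre-merge can be done at load time with Unity’s `Mesh.CombineMeshes`. A minimal sketch, assuming every child object shares the same material and the script sits on an empty parent that has its own `MeshFilter`/`MeshRenderer`:

```csharp
using UnityEngine;

// Sketch: merge all child meshes into one mesh so the whole group
// renders in a single draw call. Assumes all children share one material.
public class CombineChildren : MonoBehaviour
{
    void Start()
    {
        MeshFilter[] filters = GetComponentsInChildren<MeshFilter>();
        var combine = new CombineInstance[filters.Length];

        for (int i = 0; i < filters.Length; i++)
        {
            combine[i].mesh = filters[i].sharedMesh;
            // Bake each child's transform into the merged vertices.
            combine[i].transform = filters[i].transform.localToWorldMatrix;
            filters[i].gameObject.SetActive(false);
        }

        var merged = new Mesh();
        merged.CombineMeshes(combine); // mergeSubMeshes defaults to true
        GetComponent<MeshFilter>().mesh = merged;
    }
}
```

The trade-off mentioned above applies directly here: the merged mesh is now culled as a single unit.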
Then there are arguments over dynamic vs. pre-built merging of objects. There is an overhead to dynamic merging (ignoring hand-crafted optimisations, i.e. pooling many meshes into a single mesh), but it reduces the potential for sending sub-models to the GPU that wouldn’t be rendered. This argument is also swayed by whether the models being merged are themselves static or dynamic in the environment.
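For the pre-built side, Unity also exposes `StaticBatchingUtility` for geometry that never moves. A sketch, assuming the level pieces all sit under one root object:

```csharp
using UnityEngine;

// Sketch: batch all the static children of `levelRoot` at runtime.
// The batched objects must not move, rotate or scale afterwards.
public class BatchLevel : MonoBehaviour
{
    public GameObject levelRoot; // assumed: parent of the static level pieces

    void Start()
    {
        StaticBatchingUtility.Combine(levelRoot);
    }
}
```

Unlike a manual merge, the objects keep their own renderers, so per-object culling still works.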
Then there is the question of whether to merge models by merging their materials through texture atlases. Again you can gain in terms of reduced draw calls, but at the loss of per-material control over the different models you’ve merged. A unified renderer with a single material/shader would partly address that, but that’s a whole other discussion.
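If you do go the atlas route, Unity can build one at runtime with `Texture2D.PackTextures`. A sketch only; in practice you’d also need to remap each mesh’s UVs into the returned rects:

```csharp
using UnityEngine;

// Sketch: pack several textures into one atlas so the models that
// used them can share a single material (and thus batch together).
public class AtlasBuilder : MonoBehaviour
{
    public Texture2D[] sourceTextures; // assumed: the per-model textures

    void Start()
    {
        var atlas = new Texture2D(2048, 2048);
        // Returns one UV rect per source texture inside the atlas.
        Rect[] uvRects = atlas.PackTextures(sourceTextures, 2, 2048);
        // Each model's UVs must then be remapped into its rect (not
        // shown) and its renderer pointed at one shared material.
    }
}
```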
Generally, reducing draw calls through batching/merging is a common approach and will provide performance gains as long as it’s not abused. However, as CPUs and GPUs get faster (depending on the target platform), these gains may not be as large as they once were, and if poorly implemented the technique may even be detrimental.
Overall though, whilst keeping such an optimisation in mind at the start of a project is worthwhile, it is probably better not to merge models unnecessarily until later in development, when you can more accurately profile the performance difference such changes make.