So I have a Platform game object with multiple children. All of these children are individual meshes made in Unity (since you can’t join game objects in Unity, I “combined” them by parenting them to one game object). However, this leads to multiple draw calls for just one platform. So I’ve been thinking about recreating this exact same game object in Blender. The parts of the platform that share the same material/texture would be joined together in Blender, so that those parts act as a single mesh. The final result would be a platform that looks exactly like the one I created in Unity, except with fewer children, resulting in fewer draw calls.
But is this really more efficient? I’m under the impression that Blender models would be more GPU-intensive than a model built entirely out of Unity game objects.
Individual objects/meshes each have their own Mesh Renderer (Skinned Mesh Renderer for characters). Each Mesh Renderer issues its own set of draw calls, so the more of them you have, the more per-frame overhead you incur.
Generally, combine as much of your geometry into one mesh as you can (per active object, or group your static level objects into single mesh chunks), and combine as many of your textures into one texture as you can (this is called a texture atlas), so that each group needs only one Mesh Renderer. You can do this either in Blender, with a mesh/material/texture baking asset in Unity, or with your own method using the Unity API at runtime. You can combine meshes in Unity at runtime using this: Unity - Scripting API: Mesh.CombineMeshes
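For the runtime route, a minimal sketch of using Mesh.CombineMeshes might look like the following. This assumes all the child meshes share one material (so the submeshes can be merged); the script and component names are just placeholders for illustration:

```csharp
using UnityEngine;

// Attach to the parent platform object. Requires a MeshFilter and
// MeshRenderer on the parent to hold the combined result.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class CombineChildMeshes : MonoBehaviour
{
    void Start()
    {
        MeshFilter parentFilter = GetComponent<MeshFilter>();
        MeshFilter[] filters = GetComponentsInChildren<MeshFilter>();

        var combines = new System.Collections.Generic.List<CombineInstance>();
        foreach (MeshFilter f in filters)
        {
            // Skip the parent's own (empty) filter and any missing meshes.
            if (f == parentFilter || f.sharedMesh == null) continue;

            combines.Add(new CombineInstance
            {
                mesh = f.sharedMesh,
                // Bake each child's transform into the combined vertices.
                transform = f.transform.localToWorldMatrix
            });
            f.gameObject.SetActive(false);
        }

        Mesh combined = new Mesh();
        // mergeSubMeshes defaults to true: one submesh, one material,
        // one draw call (assuming the children shared a material).
        combined.CombineMeshes(combines.ToArray());
        parentFilter.sharedMesh = combined;
    }
}
```

If the children use several materials, you would instead combine one group per material (or atlas the textures first), since a single-material combine like this can't preserve multiple materials.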
Depending on your model, a Blender mesh could also save you some vertices if you build it with that in mind. Depending on the complexity of your project, any vertices you save could make a difference.
I would also imagine it would be easier on Unity not to have to keep track of so many parent and child objects.