My game will have a large number of relatively generic building objects in it.
Their properties and behaviours will be modified by the components they have.
To create new buildings I am thinking of writing a Factory-like class that will create a new GameObject and add all the required Components to it.
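A minimal sketch of what such a factory might look like (the class name, method, and component choices here are illustrative, assuming Unity's C# API):

```csharp
using UnityEngine;

// Hypothetical factory: creates a bare GameObject and attaches
// the components a given building variation needs.
public static class BuildingFactory
{
    public static GameObject Make(string buildingName)
    {
        var go = new GameObject(buildingName);

        // Attach whatever components this variation requires.
        go.AddComponent<MeshFilter>();
        go.AddComponent<MeshRenderer>();
        go.AddComponent<BoxCollider>();
        // e.g. a custom behaviour component:
        // go.AddComponent<ResourceProducer>();

        return go;
    }
}
```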
My question is, will this be more memory and resource intensive than defining a prefab for each variation?
For example I might have 30 different variations of my buildings, and 100+ buildings in a scene.
I could programmatically create prefabs for each of the 30 variations and save those, then use them from the Factory class, but that seems like overkill. Is there a big performance difference between generating many instances via code and instantiating prefabs?
It kinda sounds like you're reinventing the wheel. The purpose of prefabs is to let you apply a bunch of components and configure them with preset values. If I understand you correctly, you're essentially just saving out those components and configurations into a separate file, which you're reading from at run-time to create a pseudo-prefab.
But your main question was about performance. I think this is something you could test fairly easily, by wrapping your FactoryClass.Make() call with code that measures how much Time.realtimeSinceStartup elapses while instantiating your prefab. I don't have any data I can reference, but I suspect you'll find the difference in performance is negligible.
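Something like this rough sketch would do for the measurement (FactoryClass.Make() is the hypothetical factory call from the question; for finer resolution, System.Diagnostics.Stopwatch is an alternative):

```csharp
using UnityEngine;

// Attach to any object in the scene; logs how long a batch of
// factory-built buildings takes to spawn.
public class SpawnTimer : MonoBehaviour
{
    void Start()
    {
        float start = Time.realtimeSinceStartup;

        for (int i = 0; i < 100; i++)
            FactoryClass.Make(); // or: Instantiate(buildingPrefab);

        float elapsed = Time.realtimeSinceStartup - start;
        Debug.Log("Spawned 100 buildings in " + (elapsed * 1000f) + " ms");
    }
}
```

Running the same loop with Instantiate on a prefab gives you the comparison figure.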
Unless your game is the exception (i.e., you're spawning thousands of prefabs at run-time), this may be a premature optimization. However, I do like your idea of abstracting prefab instantiation behind an interface. That way, if such an optimization does become necessary in the future, the rest of your code stays decoupled from the instantiation scheme, making it simpler to switch later.
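To illustrate that decoupling, a sketch of such an interface (all names here are hypothetical, including the FactoryClass.Make(string) signature): callers depend only on the interface, so you can swap the code-driven implementation for a prefab-driven one without touching them.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical abstraction over "how a building gets created".
public interface IBuildingSpawner
{
    GameObject Spawn(string variation);
}

// Code-driven implementation: builds the object component by component.
public class FactorySpawner : IBuildingSpawner
{
    public GameObject Spawn(string variation)
    {
        return FactoryClass.Make(variation); // hypothetical factory call
    }
}

// Prefab-driven implementation: swappable later without changing callers.
public class PrefabSpawner : IBuildingSpawner
{
    private readonly Dictionary<string, GameObject> prefabs;

    public PrefabSpawner(Dictionary<string, GameObject> prefabs)
    {
        this.prefabs = prefabs;
    }

    public GameObject Spawn(string variation)
    {
        return Object.Instantiate(prefabs[variation]);
    }
}
```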