So I just want to make sure I understand this right: every object that is an instance will have its own separate lightmap? Since two instanced objects cannot inhabit different positions in the UV lightmap, because they are instances, they must have their own individual lightmaps, correct?
Also, as I understand it, Unity will do a separate draw call for each separate lightmap, since having a different lightmap qualifies as having a different material. Is that correct? So does that mean that an object, instanced 20 times and then lightmapped, will result in 20 different lightmaps for those 20 instances? Does that mean that Unity will then do 20 draw calls for those 20 lightmapped instances?
Won’t that give you a lot worse performance than just duplicating those instances and putting them all on the same lightmap, so they can all be rendered with the same draw call?
Just want to make sure I am correct in my understanding there… Am I?
For the purpose of lightmaps, imagine that behind the scenes, Unity is taking all the static objects in the scene and merging them together, treating them as one big object, for which it then creates a second UV map that encompasses every polygon. It then bakes a single lightmap image that every material used on those static objects shares.
For dual-lightmap setups, it creates two lightmaps, a near one and a far one. The shadow distance in the quality settings is then used to determine where near fades to far (far being the one with the baked shadows on it).
So you can have as many instances as you like, and they’ll still be considered instances, except their UV2 coords will be unique to each object.
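One way to picture how every instance can share the same mesh data yet land in its own region of the lightmap is a small per-instance scale/offset into a shared atlas. The grid packing and names below are my own illustration of the idea, not Unity's actual packing algorithm:

```python
import math

def pack_atlas(num_instances):
    """Assign each instance a (scale, offset) into a shared square atlas.

    Every instance keeps the same base UV2 coords in [0, 1]; only this
    tiny per-instance transform differs, so the mesh data stays shared.
    (Illustrative grid layout, not Unity's real packer.)
    """
    cells = math.ceil(math.sqrt(num_instances))  # grid side length
    scale = 1.0 / cells
    placements = []
    for i in range(num_instances):
        col, row = i % cells, i // cells
        placements.append({"scale": scale,
                           "offset": (col * scale, row * scale)})
    return placements

def atlas_uv(base_uv, placement):
    """Map one base UV2 coordinate into this instance's atlas region."""
    u, v = base_uv
    s = placement["scale"]
    ox, oy = placement["offset"]
    return (ox + u * s, oy + v * s)
```

So 20 instances would share one 5×5-cell atlas, each occupying its own 0.2×0.2 region: one lightmap, one set of shared mesh data, and only a per-instance offset to tell them apart.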
Lightmap indexes then allow you to create up to 256 different lightmaps per scene. So you could (as I would) split lightmaps up between similar types: one for terrain meshes, one for rocks, one for trees, one for buildings, and so on and so forth.
So Unity is doing something special that 3DS Max can’t do? ’Cause I know in Max two instances can’t have different positions in map channel 2, unless I’m missing something?
But if I understand this right, Unity is making a duplicate of every object in the scene, merging them all into one big object, generating lightmap UVs off of that, and then keeping the objects as instances. So when rendering, Unity will still reference the instance’s mesh data as an instance, but when it looks up an instance’s lightmap UVs, it will reference where that instance was placed in the big merged object?
This kind of runs into a conundrum for me, as I was really hoping to be able to UV map and bake all the lighting down for my scene in 3DS Max mental ray, then import that into Unity. But it seems I will have to forego instances if I am to do that. Perhaps I will UV map in Unity and use the external lightmapping tool.
Well, I wouldn’t think of it along the lines of “can one do something the other can’t do”, because it’s not really important in this case: you still have to get the files out of Max and into Unity, and that’s where things fall down. You can have instances and all sorts in Max and other apps just as you can in Unity, but the .fbx exporting/importing will see things differently, so some benefits aren’t really benefits in the end.
Also, IIRC Max doesn’t fully let you batch render-to-texture whole groups of objects at once; you’d need Flatiron or something similar that can do that. Otherwise you’re going to be doing every object individually (this might be different in the latest version of Max, I don’t know).
There’s very little reason to do lightmapping outside of Unity these days. IIRC you can even use HDR images to light a scene via a change in the Beast XML file, and that, to me, would have been the only compelling reason to do it externally now, especially if you’re only going to be using mental ray externally (there might be slightly more reason with one of the more advanced lighting systems, but even then a lot of that can be done in Unity with some work). It sounds like you’re considering a workflow similar to one I was forced to use once (as mentioned in your other thread), and I can promise you, you will find life so much better if you stick with Unity for this.
The only time I would bother with baking lighting externally would be when creating the base light and shading to go with the textures on the high-poly models, prior to creating the low-poly normal-mapped in-game versions.
I would really like to avoid baking lightmaps in Max. But I am doing this for an archviz project that has some things already defined, and Beast in Unity can’t do it all. It needs IES lights, area lights, and the ability to composite an AO pass based on luminosity. The GI also needs to take translucency on the materials into account, and the result needs to be tonemapped.
I suppose it could be possible to fake IES lights with light cookies and get away without needing a few of those features. I’ll have to run some tests though.
And I didn’t ask whether Unity is doing something Max can’t do as a means of comparing the applications. I really am curious what Unity is doing internally to allow two instances to have different spaces on a UV map, because that’s not something typical of 3D applications.
What Unity does is basically create a second set of mesh data for UV2 and the lightmap (which is the same as using a second pass). Since it would internally render them like this anyway, that’s not much of a problem. It then uses that to let meshes tile their textures while the lightmap stays correct across different instances.
Doing this yourself would require an asset importer that merges it on import into the same kind of data inside of Unity (not so hard, the Mesh class is pretty easy to work with), and naturally that you render the lightmap onto a copy of the mesh inside of Max.
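As a sketch of that “render onto a copy” idea: each duplicated mesh keeps its geometry but gets its UV2 coords rewritten into the region it occupied in the big merged layout, so no per-instance data is needed at render time. This is my own minimal illustration of the rebaking step, not Unity’s importer or any real API:

```python
def rebake_uv2(base_uv2, scale, offset):
    """Return a new UV2 list for one duplicated (de-instanced) mesh.

    The shared [0, 1] coords are squeezed into the region this copy
    occupied in the merged layout, so the externally baked lightmap
    lines up with each copy. (Hypothetical helper, for illustration.)
    """
    ox, oy = offset
    return [(ox + u * scale, oy + v * scale) for (u, v) in base_uv2]

# A unit quad's UV2, shared by every instance before rebaking:
quad_uv2 = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

# This copy occupied the right half of the bottom row of the layout:
copy_uv2 = rebake_uv2(quad_uv2, scale=0.5, offset=(0.5, 0.0))
```

The trade-off the thread is circling around: rebaking like this means every copy carries its own UV2 data (no more instancing of that channel), whereas keeping the transform per-instance lets the mesh data stay shared.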