Better GI support for modular assets

Modular assets are common, especially if you buy assets on the store.

These build corridors and ceilings by snapping parts together. This works fine and the parts are batched correctly, etc. But one area that doesn't work very well is global illumination, because each mesh gets its own lightmap island, meaning shadows will not project correctly across the seam.

I tried making an editor script that takes meshes sharing the same vertex positions in world space and stitches them together. It actually works surprisingly well, but you run into problems like unoptimized lightmap UVs. It would be much more efficient if this was done during the lightmapping stage: look at meshes that share vertices at the same point in world space, and check if they share the same plane orientation. These are good candidates to stitch together in the lightmap so that shadows are projected over the seam correctly. This also solves the problem of mip map artifacts when the camera is far away from a lightmap face.
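A rough sketch of what that candidate search could look like, in Python for illustration only (the bucketing scheme, tolerances, and all names here are my own, not Unity API; a real implementation would run over Mesh data in C#):

```python
import math

def quantize(pos, tol):
    # Snap a world-space position to a grid of cell size `tol`, so
    # vertices within tolerance land in the same bucket. (Positions
    # right on a cell boundary can fall into different buckets; a
    # robust version would also probe neighbouring cells.)
    return tuple(round(c / tol) for c in pos)

def stitch_candidates(meshes, tol=0.0005, max_angle_deg=5.0):
    """Find pairs of meshes that share a world-space vertex and whose
    face normals at that vertex are nearly parallel.

    `meshes` maps a mesh id to a list of (position, normal) tuples.
    """
    buckets = {}
    for mesh_id, verts in meshes.items():
        for pos, normal in verts:
            buckets.setdefault(quantize(pos, tol), []).append((mesh_id, normal))

    cos_limit = math.cos(math.radians(max_angle_deg))
    pairs = set()
    for entries in buckets.values():
        for i in range(len(entries)):
            for j in range(i + 1, len(entries)):
                (a, na), (b, nb) = entries[i], entries[j]
                if a == b:
                    continue
                dot = sum(x * y for x, y in zip(na, nb))
                if dot >= cos_limit:  # nearly the same plane orientation
                    pairs.add(tuple(sorted((a, b))))
    return pairs
```

Bucketing by quantized position avoids comparing every vertex against every other one, which is the "very large search space" concern; only vertices that land in the same cell are compared.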

This only makes sense for planes like floors and ceilings, so I guess they need to be marked to be treated like this during lightmapping.

Edit: I'm on Windows, not Linux.

A little more detail. Here are two planes. You can see that they end up in different places in the lightmap. Chances are the shadow will be projected incorrectly at the seam.

Here I have applied my editor tool, joining the two meshes together.

You can make feature requests at this link :slight_smile: https://portal.productboard.com/unity/1-unity-platform-rendering-visual-effects/c/561-didn-t-find-what-you-were-looking-for

This is a tricky problem to solve in a general way. I think it's unlikely that we will ever have a fully automated solution, as that would involve scanning over all instances looking for overlapping vertices, which is a potentially very large search space. Then there's the question of what should even be considered a seam:

- For cubes and quads it's fairly straightforward, but with more complex geometry it may not be. Vertices are not guaranteed to line up across a seam.
- The seam may be intentional, so you probably need to take normals into account (and normals may not even be present).
- There may be a tiny gap in the seam, either due to floating point error or due to authoring, so you'll likely need some configurable threshold for detecting seams.
- You may not be able to directly stitch the UV layouts, so you'll likely need to re-unwrap the stitched meshes, which the user needs some control over, etc.

Other than that, modifying the user's mesh data without explicit consent is a no-go. But perhaps something like this could be exposed as an explicit opt-in optimization step, with manually marked meshes, as you suggest.

I mean, I do this already for large scenes, so it's not impossible. I don't even look at the bounding boxes first; doing that could make it even faster.

Given that many of your customers use store assets, this would be a very good feature.

Also, optimizing the usage of lightmap space at the same time would be perfect. For example, identify empty spaces in UV2 and use them for other objects.

My solution lets you define a plane angle offset, 5 degrees by default; if the angle between the two joining faces is greater, it skips them. It will only join meshes if two vertices share the same space in world coordinates. It works very well, actually.

Edit: and yes, "sharing world space" is configurable. I think the default in my solution is 0.5 mm, but it's easy to override.
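The two checks described above, a distance tolerance on the vertices and an angle limit on the faces, might look something like this in Python pseudocode (function and parameter names are mine, not from the actual tool; the 0.5 mm and 5 degree defaults are the ones mentioned here):

```python
import math

def should_join(va, vb, na, nb, weld_tolerance=0.0005, max_angle_deg=5.0):
    """Decide whether two world-space vertices from different meshes
    can be welded across a seam: positions within `weld_tolerance`
    metres (0.5 mm default) and unit face normals within
    `max_angle_deg` degrees of each other."""
    if math.dist(va, vb) > weld_tolerance:
        return False
    # Clamp the dot product to guard against floating point error
    # before taking the arc cosine.
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(na, nb))))
    return math.degrees(math.acos(dot)) <= max_angle_deg
```

For example, two vertices 0.3 mm apart with identical normals would be welded, while the same vertices on perpendicular faces (a floor meeting a wall) would be skipped.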


I meant a minimum of two vertices; you can have more. But I think the normal use case is two planes sharing the same space with two vertices.

Also, my solution needs to group meshes by reflection probe influence and materials first, occlusion culling will be affected, and lastly there will be a lot of unused UV2 space in the lightmap if the lightmapper doesn't account for this. That's why I think it's better to keep the meshes separate and solve this in the lightmapping step.