Unity Workflow and Performance Questions

I’ve been doing some research into issues that may impact performance in Unity, and one issue most of us are well aware of is the performance degradation when, for example, you have a huge structure or level built as one mesh; Unity will render everything even though most of it lies outside the camera view. The solution is to split everything into reasonably sized chunks so that Unity renders only those chunks visible on screen. The flip side of this issue is that (generally speaking) the more meshes you have on screen at once, the more performance will suffer, and the solution to that is to mark everything immobile as static and let Unity automagically batch it for you, or to combine the meshes yourself.
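For reference, the script-driven equivalent of ticking the Static checkbox appears to be StaticBatchingUtility.Combine; a minimal sketch, assuming all the immobile chunks sit under a single root object:

```csharp
using UnityEngine;

// Attach to the parent of the immobile chunk meshes. Combining at
// Start is the runtime equivalent of marking the children Static in
// the editor; the children must not move after this call.
public class BatchChunks : MonoBehaviour
{
    void Start()
    {
        StaticBatchingUtility.Combine(gameObject);
    }
}
```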

  1. I have a “floating island” that was built in Maya but is currently one big mesh. I assume, for performance reasons as explained above, I should split the island into several reasonably sized chunks. Is there a “magic number”, so to speak, for how big or small these chunks should be relative to the view screen (assuming a Diablo-style, 45-degree top-down view)? I’m sure there must be some sweet spot.

  2. I realize that different hardware has different limitations as to maximum texture size. What are our options for targeting multiple platforms? Should I have a version of each material containing a 4096x4096 texture for the PC, another containing a 2048x2048 for the iPad, etc., and hook them all up via a custom editor script at build time? (See the sketch after this list for the kind of thing I have in mind.)

  3. Somewhat related to #2: if my max texture size for a particular platform is 1024x1024 and I know this won’t cut it for a decent-sized mesh which absolutely needs a 4096x4096 texture, is it reasonable to create four materials (each containing a 1024x1024 texture) for this mesh? Or are there other approaches I should consider? Basically, I would love to know what the approaches are to re-targeting a PC game with hi-def textures to platforms like the iPhone or iPad, which have smaller max texture sizes.
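Regarding #2, this is the sort of editor script I’m imagining; purely a hypothetical sketch (the “_hires” naming convention and the size table are made up, and the BuildTarget enum names vary between Unity versions):

```csharp
using UnityEditor;
using UnityEngine;

// Hypothetical asset postprocessor (place in an Editor folder):
// clamps the imported size of any texture whose path contains
// "_hires" based on the active build target.
public class PlatformTextureClamp : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        if (!assetPath.Contains("_hires"))
            return;

        TextureImporter importer = (TextureImporter)assetImporter;
        BuildTarget target = EditorUserBuildSettings.activeBuildTarget;

        // 4096 on desktop, 2048 on iPad-class hardware, 1024 elsewhere.
        // (BuildTarget.iPhone is named BuildTarget.iOS in later versions.)
        if (target == BuildTarget.StandaloneWindows)
            importer.maxTextureSize = 4096;
        else if (target == BuildTarget.iPhone)
            importer.maxTextureSize = 2048;
        else
            importer.maxTextureSize = 1024;
    }
}
```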

Many thanks in advance,

El Diablo

I have no idea how big or small your meshes should be, but if you cut them into pieces you could use occlusion culling so the camera won’t render what it doesn’t see; otherwise it will render everything.

Occlusion culling is great, but it has limited use in my case because I have a Diablo-style top-down view.

bump

Couldn’t you import that as a terrain? Unity’s terrain system has LOD and, I believe, visibility culling.

A top-down view on a giant map makes occlusion culling easier, not harder.

I.e., if you cut your giant mesh into chunks roughly 2/3 of a screen width across in a grid-like pattern, you can make sure only a handful of blocks are ever rendered at any one time, with all others being occluded out. Because you have a fixed camera direction, you can design your maps to take advantage of the fact that no two objects more than a screen width apart ever have to be rendered at the same time.
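Since the view is fixed, you can even do this “culling” yourself with a simple distance check; a rough sketch (the field names and the screen-width value are placeholders for your scene):

```csharp
using UnityEngine;

// Hypothetical chunk activator for a fixed top-down camera: enables
// only the grid chunks within one screen width of the camera's focus
// point and disables the rest.
public class ChunkActivator : MonoBehaviour
{
    public Transform cameraFocus;        // ground point the camera looks at
    public Renderer[] chunks;            // one renderer per grid chunk
    public float screenWidthWorld = 30f; // screen width in world units

    void Update()
    {
        foreach (Renderer chunk in chunks)
        {
            // A chunk farther than one screen width from the focus can
            // never be visible, so skip rendering it entirely.
            float dist = Vector3.Distance(chunk.bounds.center,
                                          cameraFocus.position);
            chunk.enabled = dist < screenWidthWorld;
        }
    }
}
```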

As far as texture sizes go, iOS supports 1024x1024 on ARMv6/OpenGL ES 1.1 devices and 2048x2048 on ARMv7/OpenGL ES 2.0 devices; the iPad is an ARMv7 device. And I don’t know of many desktop GPUs that don’t support 4096x4096.

As for downscaling textures, the easiest option is simply to lower the texture quality setting. Doing a direct texture split would be difficult, since you’d want to avoid having any UV region cross the seam, but there are ways to automate creating texture atlases, which could be used to subdivide and repack the parts of your main texture.
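Unity’s built-in Texture2D.PackTextures can handle the packing step, for instance; a minimal sketch (the source textures must be imported as readable, and the names here are illustrative):

```csharp
using UnityEngine;

// Hypothetical atlas builder: packs a set of source textures into one
// atlas no larger than maxSize, returning the normalized UV rect each
// source ended up in so meshes can be remapped onto the atlas.
public static class AtlasBuilder
{
    public static Texture2D Build(Texture2D[] sources, int maxSize,
                                  out Rect[] uvRects)
    {
        Texture2D atlas = new Texture2D(2, 2);
        // PackTextures rescales sources as needed to fit within
        // maxSize and returns each one's UV rectangle in the atlas.
        uvRects = atlas.PackTextures(sources, 2, maxSize);
        return atlas;
    }
}
```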

It also depends on how many of these textures you plan on having. It’s important to note that the memory budget on iOS is tiny compared to desktop, and that on an iPad 1, around five 2048x2048 RGBA32 textures would max out your entire memory budget for the device.
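To spell out the arithmetic: an uncompressed RGBA32 texture costs 4 bytes per texel, so each 2048x2048 texture is 2048 × 2048 × 4 bytes = 16 MB, and five of them come to roughly 80 MB.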

That’s not the case; OpenGL (and I assume Direct3D) won’t render anything outside the camera view. There’s still a performance benefit in not uploading vertices that won’t be rendered anyway, but static batching takes care of that (it doesn’t mean that the entire combined mesh is always used, only the parts that are needed). The only real downside to static batching is that the combined mesh data can be quite large and increase the file size by a fair amount.

–Eric

Sorry, I should have phrased that differently. What I meant was that in a top-down view there’s not much to occlude, so optimizing for occlusion culling won’t pay me huge dividends in my particular case.

These are my thoughts exactly, which is why it makes sense to subdivide a huge mesh into smaller pieces. You said 2/3 of a screen width; would you happen to know if that’s the “sweet spot”? I’m sure that, mathematically speaking, there must be a sweet spot for a given viewport, and I’m wondering if anyone knows what the formula might be.
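My own back-of-the-envelope counting, in case it helps frame the question (please correct me if this is off): a view w wide by h tall over a grid of chunk pitch s can overlap at most ⌈w/s⌉ + 1 columns and ⌈h/s⌉ + 1 rows, so the number of chunks rendered at once is at most (⌈w/s⌉ + 1) × (⌈h/s⌉ + 1). For example, s = (2/3)w gives up to 3 × 3 = 9 chunks in the worst case, while s ≥ w guarantees at most 4; smaller chunks cull more tightly but raise the draw-call count, so the sweet spot presumably balances those two.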

Whoa!!! :open_mouth: I hadn’t thought of that!! It seems that it’s not a simple matter of down-scaling textures, re-targeting to a different platform and clicking ‘Build’.

Not in this case, at least not to my knowledge; if it’s a single huge mesh and I’m only viewing a tiny part of it, then I assume it will still need to be rendered in its entirety. This is what I seek to avoid by splitting it into chunks smaller than the view size.