Should I delete polygons that are not seen by the player?

I’m using ProBuilder to build levels, but I’m a bit split on the best way to make them, since I’ve seen a number of different methods. On one hand, I have seen people use ProBuilder to make Quake-style brushes and build levels with those. On the other hand, I have also seen people take meshes and flip the polys inside out, which gives them instant internal “walls” without having to resort to the “one wall, one mesh” approach typical of old-school mapping.

But my question is: is the former method actually smart? If you use the “brush method”, won’t you vastly increase the polygon count even though most of the polys are never seen? Which method is better?

Yeah, but wait until you’ve got things finalized before you optimize like that. Maybe you’ll need some wall somewhere, and the thing to do is just flip one around or whatever. Better to leave your options open.

1 Like

Not really.

Unity supports occlusion culling, provided by Umbra. So hidden surface removal can be handled by the engine, and you don’t really need to do anything aside from correctly marking static objects and baking the occlusion data for the level.

https://docs.unity3d.com/Manual/OcclusionCulling.html
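
For reference, here is a minimal editor-script sketch of that workflow: mark the selected objects as occluder/occludee static, then bake the occlusion data. It assumes default bake settings, and the menu path is made up; it’s just an illustration of the APIs involved, not a recommended tool.

```csharp
// Editor-only sketch: flag the selected objects for occlusion culling and bake.
// Assumes default Umbra bake settings; adjust per project.
using UnityEditor;
using UnityEngine;

public static class OcclusionBakeHelper
{
    [MenuItem("Tools/Occlusion/Mark Selection And Bake")] // hypothetical menu path
    private static void MarkSelectionAndBake()
    {
        foreach (GameObject go in Selection.gameObjects)
        {
            // Flag objects so Umbra treats them as both occluders and occludees.
            var flags = GameObjectUtility.GetStaticEditorFlags(go)
                        | StaticEditorFlags.OccluderStatic
                        | StaticEditorFlags.OccludeeStatic;
            GameObjectUtility.SetStaticEditorFlags(go, flags);
        }

        // Bake occlusion data for the open scenes, same as pressing Bake in
        // Window > Rendering > Occlusion Culling.
        StaticOcclusionCulling.Compute();
    }
}
```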

Quake was designed for the following hardware:
75 MHz Intel Pentium.
8–16 megabytes of RAM.

Therefore optimization was quite severe by modern standards. Modern hardware optimizes things differently, and designing things in ProBuilder does not actually produce the same degree of optimization as in Quake… but the same degree of optimization isn’t even needed, because the hardware is much more powerful.

So, while removing unused objects makes sense, thinking about removing polygons is largely unnecessary and should only be done as a last resort. Until then, you should let the engine’s hidden surface removal handle it.

4 Likes

Just because a wall isn’t seen by the player doesn’t mean that the effects of the wall aren’t. I tend to leave geometry that is out of view simply because it may cast a shadow or reflect light. I also want bullets to bounce off of it, or at least give me a ping sound effect when it’s hit.

1 Like

Umbra’s method of removing backfaces is not that great. We are looking at removing unseen polys manually to improve fill rate (it’s a VR game, though). I did a noclip run in Half-Life: Alyx and they don’t have a single unnecessary poly, and that is a gorgeous-looking VR game.

Colliders and render meshes are two different things, though.

1 Like

Umbra is “good enough” for most cases, and VR is a special scenario due to very high performance constraints. The OP did not indicate they have special constraints.

Noclip also won’t give you a correct idea of what’s being culled; you’d need wireframe mode instead. It is highly doubtful that they culled all the way down to the polygon level, as culling at that level produces overhead that outweighs the benefits on a hardware-accelerated renderer.

If you are concerned with FILL rate, then you should be able to kill off individual primitives at the geometry shader stage based on some criteria, but you’ll still need “broader phase” culling done by the engine before that.

It’s not runtime culling; they have removed every single backface not seen by the camera at design time.

Edit: but no, it might not be needed for most scenarios. Though if fill rate is a concern (like in an open world game), it might be a thing to consider.

Another thing to note, which has more to do with memory than with rendering: even if part of a mesh is culled by Unity, its vertices are still loaded into memory and processed even though they are not visible. It’s a good idea to do a pass on art geometry for the memory budget, and a pass for removing what never renders on screen. I wouldn’t get too caught up in this for higher-end platforms like desktop, but for mobile and VR applications it’s a necessity. As others have said, make sure you are not removing important shadow-casting elements, and make sure your cinematic or fly-through cameras do not go behind or above objects you’ve removed backfaces from. In short, know where your camera can go in all cases before you make this optimization pass.

3 Likes

Also, get the low-hanging fruit first. Here is a good example from our game that I should fix: these rocks.

Seen from player side

Seen from the other side (also the PLM fubars, even though these have double-sided GI)

Also, I will save lightmap space by removing unseen faces.

Same here: these mountains from the player perspective

Complete waste from the other side, even more so in UV space since they are quite low-poly.

I just need to come up with a good procedural way of removing them :stuck_out_tongue:

It would have been cool if there were engine support for this, for example using the navmesh so the engine could test what’s visible to the player, etc.

Project Acoustics uses the navmesh (or a custom mesh) to verify where the player can be, so that it does not create unnecessary listener probes. It works very well.
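
For what it’s worth, a rough sketch of how such a navmesh-driven pass could look: sample viewpoints on the NavMesh, raycast to each triangle’s center, and drop triangles that no viewpoint can see. This is purely an assumption of how a tool like that might work (the sample count, eye height and the occlusion test are all made up), not an existing engine feature.

```csharp
// Design-time sketch: remove triangles of a mesh that are never visible from
// any sampled player position on the NavMesh. Purely illustrative; sample
// density, eye height and the visibility test are all assumptions.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.AI;

public static class UnseenTriangleStripper
{
    public static Mesh StripUnseenTriangles(MeshFilter target, int viewSamples = 256, float eyeHeight = 1.7f)
    {
        Mesh source = target.sharedMesh;
        Vector3[] vertices = source.vertices;
        int[] triangles = source.triangles;
        Transform tf = target.transform;

        // Sample candidate viewpoints by projecting random points onto the NavMesh.
        var viewpoints = new List<Vector3>();
        Bounds worldBounds = target.GetComponent<Renderer>().bounds;
        for (int i = 0; i < viewSamples; i++)
        {
            Vector3 random = worldBounds.center + Random.insideUnitSphere * worldBounds.extents.magnitude * 2f;
            if (NavMesh.SamplePosition(random, out NavMeshHit hit, 50f, NavMesh.AllAreas))
                viewpoints.Add(hit.position + Vector3.up * eyeHeight);
        }

        // Keep a triangle if any viewpoint has an unobstructed line to its center.
        var keptIndices = new List<int>();
        for (int t = 0; t < triangles.Length; t += 3)
        {
            Vector3 center = tf.TransformPoint(
                (vertices[triangles[t]] + vertices[triangles[t + 1]] + vertices[triangles[t + 2]]) / 3f);

            bool visible = false;
            foreach (Vector3 eye in viewpoints)
            {
                Vector3 dir = center - eye;
                // "Visible" here means nothing blocks the ray before it reaches the triangle.
                if (!Physics.Raycast(eye, dir.normalized, dir.magnitude - 0.05f))
                {
                    visible = true;
                    break;
                }
            }

            if (visible)
            {
                keptIndices.Add(triangles[t]);
                keptIndices.Add(triangles[t + 1]);
                keptIndices.Add(triangles[t + 2]);
            }
        }

        // Build a new mesh with only the kept triangles (single submesh assumed;
        // unused vertices are left in place for simplicity).
        Mesh stripped = Object.Instantiate(source);
        stripped.triangles = keptIndices.ToArray();
        return stripped;
    }
}
```

A real tool would also want to reject triangles that face away from every viewpoint and compact the unused vertices afterwards.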

There are two different things:

  1. Removing polygons that are never seen in current level.
  2. Removing polygons that are never seen in current frame.

#1 is static, #2 is dynamic.

Umbra deals with scenario #2, what you see with noclip is scenario #1.

The thing about removing invisible POLYGONS is that you may actually increase workload and memory use for the GPU by doing that, especially if your level is built using a modular kit, because “Wall” and “Wall without some polygons” are two distinct pieces of geometry instead of one.

In the end, it is the kind of situation where you really want to consult the Unity Profiler.
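
To illustrate the modular-kit point with a hedged sketch: instanced drawing batches by mesh, so the moment one wall variant has its hidden faces stripped it becomes a separate mesh and a separate batch. The mesh and field names below are made up.

```csharp
// Sketch: Graphics.DrawMeshInstanced batches by mesh + material, so a trimmed
// wall variant can no longer share a batch with the original wall mesh.
using UnityEngine;

public class ModularKitDrawExample : MonoBehaviour
{
    public Mesh wallMesh;          // the original kit piece
    public Mesh wallTrimmedMesh;   // same wall with hidden faces removed (hypothetical asset)
    public Material wallMaterial;  // must have "Enable GPU Instancing" ticked

    public Matrix4x4[] wallTransforms;        // instances of the full wall
    public Matrix4x4[] trimmedWallTransforms; // instances of the trimmed wall

    void Update()
    {
        // One instanced call per unique mesh: stripping faces splits the batch in two.
        Graphics.DrawMeshInstanced(wallMesh, 0, wallMaterial, wallTransforms);
        Graphics.DrawMeshInstanced(wallTrimmedMesh, 0, wallMaterial, trimmedWallTransforms);
    }
}
```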

2 Likes

Here is a backface threshold of 5, from the player side.


From the back

It doesn’t cull a single polygon, so I wouldn’t recommend relying on Umbra. Plus, all that wasted UV2 space.

What’s interesting though: rendering what you see above takes 4 ms on a 1080 Ti. Fill rate on a Valve Index is a bitch.

Because Umbra is being used with a GPU-accelerated renderer, it is likely operating on whole meshes and does not work on individual polygons. See the article on occlusion culling.

(Without occlusion culling)


(With occlusion culling).

Splitting level geometry into polygonal batches with different visibility was a thing in the era of BSP levels; at some point the approach shifted.

If you need something more aggressive than that, you’ll need to script it.

There was an article where someone discussed building lists of potentially visible objects using the GPU. I have not found the original article, but this is a variation of the technique:
https://developer.oculus.com/blog/occlusion-culling-for-mobile-vr-developing-a-custom-solution/

Umbra is not GPU-based, it is CPU-based. Unreal uses a GPU-powered occlusion system.

The backface threshold actually only saves memory. It’s used to remove occlusion cells that can only see backside faces.

"Is being used WITH".

I think we are getting off track here. My point is that if you use lightmapping, you can save a lot of UV space and, in the end, SetPass calls, because you can fit more objects onto the same atlas. And you get better fill rate, since the GPU does not need to spend time on faces that are never seen on objects that are visible (something Umbra does not help with).