What renders faster, more pixels or more geometry?

Imagine that you have a 3D model of a simple cube (100 x 100 x 100) with a cylinder stacked on top of it (diameter 100, so it fits exactly within the cube's top face), like a 2-layer cake.

Now imagine rendering this cake from above at full screen size. It might look like a circle inset within a square.

Version A is a simple cube with two triangles per face, so the full top face of the cube is rendered, and then the cylinder is rendered on top of it. Since the view is full screen, most of the pixels drawn for the cube's top are then overwritten by the cylinder's pixels (wasted work); the z-buffer hides the part of the cube covered by the cylinder.

Version B is a cube whose top face has a circular hole cut out to match the cylinder above it. Rendered from above it looks identical, but geometrically the cube now has many more facets and vertices because of the hole. The advantage is that only the visible pixels of the cube are rendered, so when the cylinder is drawn on top, almost no pixels are overwritten.

Overall, the question is this: is it better to cut holes in the geometry to minimize pixel overlap and fill area, at the cost of a greater vertex (and possibly facet) count, or is it better to keep the geometry as simple as possible and let the z-buffer sort things out, even if many rendered pixels will be overwritten by others?

Anyone have any empirical evidence?
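For a rough sense of scale, here is a back-of-the-envelope sketch in Python of the fill counts for the two versions (the 1000 x 1000 full-screen size is a made-up assumption):

```python
import math

screen = 1000 * 1000  # hypothetical full-screen target, 1000 x 1000 px

# Version A: the cube's flat top fills the whole square, then the
# cylinder cap redraws the inscribed circle over it.
square_px = screen
circle_px = math.pi / 4 * screen   # an inscribed circle covers pi/4 of its square
version_a_fill = square_px + circle_px
overdraw_a = circle_px             # pixels shaded twice

# Version B: the circle is cut out of the cube's top, so nothing overlaps.
version_b_fill = (square_px - circle_px) + circle_px  # each pixel shaded once

print(f"Version A fills {version_a_fill:,.0f} px "
      f"({overdraw_a / screen:.0%} of the screen shaded twice)")
print(f"Version B fills {version_b_fill:,.0f} px with no overdraw")
```

So version A shades roughly 78% more pixels than version B; whether that matters more than version B's extra geometry is exactly the question.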

The amount of processing needed to put pixels on the screen is several orders of magnitude less than the amount of processing needed to figure out which pixels do not need to be rendered. Replacing a pixel that has already been rendered, on the other hand, involves almost no calculation.

Cutting into the geometry will cause much more harm than good: the engine won't bother working out which pixels don't need to be rendered, and will simply render the entire object every time. Cutting holes in the geometry therefore considerably increases the number of tris (and pixels) rendered, and will actually decrease performance.

The simpler solution is often the best solution.

Neither of these is better, sorry. It depends too much on your scene/game/rendering.

If you are using advanced pixel shaders (reflection, refraction, lighting, shading, etc.), then pixel overlap is going to cost you, since every pixel that gets overlapped still has to run its shader calculations. At large resolutions this starts to add up.
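To put rough numbers on that (the 200-instruction shader cost is purely hypothetical, and the overlap fraction is the circle-in-square case from the question):

```python
import math

SHADER_COST = 200       # instructions per fragment -- purely hypothetical
OVERLAP = math.pi / 4   # fraction of fragments shaded twice in the cake example

# Wasted fragment work scales with resolution, so the same overlap that is
# harmless at 720p becomes a real cost at 4K.
for w, h in [(1280, 720), (1920, 1080), (3840, 2160)]:
    wasted = w * h * OVERLAP * SHADER_COST
    print(f"{w}x{h}: ~{wasted / 1e6:.0f}M redundant shader instructions per frame")
```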

If you are using skinned animations, depth-mapped shadows, occlusion, vertex lighting, transparency, etc., then a large number of triangles is going to hurt you. Skinned animation tends to happen on the CPU and definitely takes a chunk out of your framerate as the vertex count gets higher. Transparency can involve sorting on the CPU, and sometimes even geometry modification (splitting meshes and triangles) on the CPU as well: another hit.
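The CPU-skinning point is easy to see in code: every animated vertex is blended against its bone matrices each frame, so the cost grows linearly with vertex count. A minimal linear-blend-skinning sketch (the function and data layout are my own illustration, not any particular engine's API):

```python
def skin_vertex(position, influences):
    """Blend a vertex position by its bone influences.

    position:   (x, y, z)
    influences: list of (bone_matrix, weight), where bone_matrix is a
                3x4 row-major affine transform and the weights sum to 1.
    """
    out = [0.0, 0.0, 0.0]
    for matrix, weight in influences:
        for i in range(3):
            r = matrix[i]
            # Affine transform of the position, scaled by the bone weight.
            out[i] += weight * (r[0] * position[0] + r[1] * position[1]
                                + r[2] * position[2] + r[3])
    return out

# Every vertex pays this loop every frame, so doubling the vertex count
# (e.g. by cutting holes into the mesh) doubles the CPU skinning work.
```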

This document is a good example of how you can't just choose one thing to be better when optimizing: Optimizing Graphics. It starts off by saying you should combine everything, then notes that only things with the same material will gain from this, and then adds that if you are using per-pixel lighting, combining can start to hurt when there are lots of lights.

It is not cut and dried; one of these is not always better than the other… unfortunately.