I’ve been thinking about this particular issue for quite some time now. When I first learned 3D modelling back in the day, it was hammered into everyone how bad it was not to make sure your models had correct topology and were laid out in a clean grid of quads or tris. I’m aware this is still taught, but even in beginner tutorials I now see what I would have considered ‘bad practice’ treated as the norm: people regularly use tricks like intersecting models to achieve the look they’re going for.
Even the big game developers are doing this to a degree, because they frequently make use of quick and dirty intersection tricks. I kind of get it, but at the same time I wonder: if you want to be really efficient, is this a good idea? I guess at the end of the day, if you end up with the model you wanted and everything works, it doesn’t matter as much, but I still wonder about it.
I’m sure back in the day, when polygon budgets were a lot more restrictive, good topology was a good way to keep the poly count down. Nowadays on console and PC that’s much less of an issue.
By ‘intersection tricks’ I’m guessing you mean something like Blender’s Boolean modifier? Using other meshes to add/subtract volume from another mesh? They are handy, but yes, from experience they will royally screw up your topology.
I will say, good topology does make your model a lot easier to edit and unwrap. Not to mention it’s very important when it comes to rigging animated models.
I’m not sure what the ‘industry standard’ 3D workflow is these days, but I still model high-poly first and then break it down into a lower-poly model, either by hand or with tools such as Blender’s Decimate modifier. And then, more often than not, bake a normal map from the high-poly model.
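To make the reduction step concrete, here’s a toy sketch of one decimation idea (grid-based vertex clustering). It is not what Blender’s Decimate modifier actually does (that uses edge collapse), and the mesh data and cell size are made up, but it shows why automated reduction tends to wreck topology: merged vertices collapse triangles, and whatever neat quad flow you had is gone.

```python
# Toy low-poly reduction via vertex clustering: snap every vertex to a
# coarse grid cell and merge all vertices that land in the same cell.
# NOT Blender's actual Decimate algorithm; purely illustrative.

def cluster_decimate(vertices, triangles, cell=1.0):
    """vertices: list of (x, y, z); triangles: list of (i, j, k) indices."""
    cell_of = {}       # grid cell -> new vertex index
    remap = []         # old vertex index -> new vertex index
    new_vertices = []
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cell_of:
            cell_of[key] = len(new_vertices)
            new_vertices.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(cell_of[key])
    # Drop triangles that collapsed (two or more corners merged into one).
    new_triangles = []
    for i, j, k in triangles:
        a, b, c = remap[i], remap[j], remap[k]
        if len({a, b, c}) == 3:
            new_triangles.append((a, b, c))
    return new_vertices, new_triangles
```

The discarded degenerate triangles are exactly where the original edge flow gets destroyed, which is why a decimated mesh usually needs manual retopology before rigging.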
There are certain conditions in which some techniques are more appropriate than others. You have to understand the situation to make the right judgement call. That’s what breaks 3D modeling into so many distinct disciplines.
Forget any rules you think you know and find the fastest way to make your model, then test it. If there is a problem it will be apparent, then you can fix it.
That means you redo work, but it is much faster and more productive than limiting yourself with made up rules to begin with.
People flock to simple rules and mindlessly follow them because all animals seek the path of least resistance. But if you want to get results, you have to let actual reality guide you, not made up rules.
Check out the Polycount wiki for plenty of archives with artists debunking many of these myths and providing the latest workflows.
Unity arranges geometry in triangle strips for GPU consumption, but that’s a minor effect. The biggest effect on model rendering performance will be how expensive the fragment shader is and how often it runs. If the mesh is opaque, how often it runs is a microtriangle issue (small or thin triangles cause overdraw); if it’s transparent, it obviously runs on all the covered pixels again.
So “topology” is not necessarily the right way to look at it, or rather doesn’t contain enough nuance. Basically, your own effort will have near-zero effect compared to what you’re doing with LODs.
TLDR:
Use LODs to keep triangles as big as possible on screen; it’s about their size, not how many of them there are
Avoid triangles that are too large, or cause long spikes (usually environment)
Your vertex count is almost never going to be the bottleneck, and it’s not really why we use LODs in modern game engines. It used to be, on old hardware. Now it’s because of microtriangle issues.
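To put a number on the “keep triangles big” rule, here’s a rough sketch of how an LOD could be picked from estimated screen-space triangle size. The FOV, resolution, 4-pixel threshold and LOD chain are made-up illustration values, not any engine’s actual heuristic:

```python
import math

def projected_area_px(tri_count, surface_area_m2, distance_m,
                      fov_y_deg=60.0, screen_h_px=1080):
    """Very rough average screen-space area (pixels) of one triangle.

    Assumes the surface faces the camera; real engines use bounding
    volumes and screen-size metrics, this is the back-of-envelope
    version of the same idea.
    """
    # visible world height at this distance for a vertical FOV
    world_h = 2.0 * distance_m * math.tan(math.radians(fov_y_deg) / 2.0)
    px_per_m = screen_h_px / world_h
    return surface_area_m2 * px_per_m ** 2 / tri_count

def pick_lod(lods, surface_area_m2, distance_m, min_tri_px=4.0):
    """lods: triangle counts, most to least detailed, e.g. [40000, 10000, 2500].

    Steps down the chain until the average triangle is 'big enough'."""
    for tris in lods:
        if projected_area_px(tris, surface_area_m2, distance_m) >= min_tri_px:
            return tris
    return lods[-1]  # farthest fallback: coarsest LOD
```

Up close the dense LOD still has comfortably sized triangles; at distance the same mesh would rasterise as microtriangles, so the picker drops to a coarser level.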
Thanks guys for the really detailed responses! That helped me confirm some things.
@spiney199 By intersection I didn’t mean booleans, sorry, I should have been clearer. What I mean is when you intersect two models (simply move or rotate them into the appropriate positions) to make them look like one whole model without actually joining the meshes together, which is a very common trick I see game developers using these days to get the results they want.
I’ve been researching this quite heavily so I know what to focus on with my modelling and how to make everything as efficient as possible. It’s interesting how the old rules I learned back in the day don’t really seem to apply as much anymore.
Yeah, I think things are changing all the time. Last I read, raytracing is more optimal with regular topology; for example, clean, regularly spaced and subdivided geo of even size would perform better. It really does seem to be evolving constantly. Virtual geometry makes much of what we were talking about moot anyway.
Oooh I getcha. What I’d call ‘building levels 101’. I remember opening up the Morrowind Construction Kit as a wee teen and seeing that piles of rocks were just lots of singular rocks all overlapping one another. I guess if it worked back then it certainly shouldn’t be a problem these days.
Good topology used to be a big deal for many reasons. Most of them are obsolete today, but it’s still good practice, so if you have the habit and it doesn’t get in the way, it’s better than not doing it:
Technical reasons:
Old models were lit and colored per vertex, so topology had a huge impact on how a model looked, because light accumulated at peaks and bled across edges. This was due to raster interpolation: if the light’s maximum fell in the center of a big triangle while lighting was computed at the vertices, you got very dull lighting at that center, but when the model moved and a vertex aligned with the light, you suddenly got a hot-spot flare.
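The dull-center effect is easy to reproduce with a few lines of math. This sketch (made-up triangle and light position) compares the Lambert term interpolated from the vertices, as old per-vertex lighting did, against the same term evaluated at the centroid, as per-pixel lighting does:

```python
import math

def lambert(p, light, normal=(0.0, 0.0, 1.0)):
    """N·L Lambert term at point p for a point light (no attenuation)."""
    lx, ly, lz = (light[i] - p[i] for i in range(3))
    mag = math.sqrt(lx * lx + ly * ly + lz * lz)
    return max(0.0, (lx * normal[0] + ly * normal[1] + lz * normal[2]) / mag)

# A big flat triangle with a light hovering right over its centroid.
tri = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 2.0, 0.0)]
centroid = tuple(sum(v[i] for v in tri) / 3.0 for i in range(3))
light = (centroid[0], centroid[1], 0.5)

per_vertex = [lambert(v, light) for v in tri]
gouraud_center = sum(per_vertex) / 3.0       # what vertex lighting interpolates
per_pixel_center = lambert(centroid, light)  # what per-pixel lighting computes
```

Here `per_pixel_center` is 1.0 (the light is straight above), while the interpolated `gouraud_center` comes out well under half that: the big triangle swallows the highlight exactly as described, and only regains it when a vertex swings under the light.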
Fanning wasn’t hardware supported, so a vertex fanning many edges was duplicated. Back then, with the fixed-function pipeline, fragment cost was more or less fixed, and the challenge was pushing as many vertices as possible to display as many polygons as possible; vertex splitting did the same work multiple times and was inefficient.
Artistic reasons:
Old models were polygon-starved, and good topology let you better control visual aspects such as the flow of the form and how the light reacts to it. That’s still true for complex models, but less so. If I’m not mistaken, the practice was pioneered by Pixar to handle cinematic levels of modeling quality.
Adding and removing detail is hard with polygons. Good topology gives you better selection and control of your mesh and simplifies reasoning about it: instead of thinking about each vertex position and triangle shape, you think about edge flow, edge rings, pole placement, and quad flow, which are closer to the surface logic. It’s much easier to add and remove detail by thinking in terms of topology and planning for the future evolution of the mesh.
Surface logic is better expressed through good topology. There is a relation between loops and curvature, and once you understand it, it becomes much easier to reason about shape. For example, I basically only use extrusion most of the time, because extrusions create loops in a grid and define negative and positive curvature, giving me fine control over planning the density.
It plays well with animation: having the edge flow follow the creases and folds makes it much easier for the model to deform predictably.
Ultimately @BIGTIMEMASTER is right: do what you have to do, and plan for a polish phase later. Retopology is a practice that implies you’ll be doing your model twice; it’s an expression of that attitude: figure out what you need, then once everything is done, redo it to clean up the loose ends.
Tangent:
Given how complex and dense modern meshes are, I wonder if there isn’t an opportunity to bring a lot of the fragment compute back to the vertices; the artefacts could even be desirable to induce variation …
Topology doesn’t matter much for traditional rendering. Your model is polygon soup and will be rendered as such.
A different primitive type can result in an optimization; for example, triangles rendered as strips can give a performance boost, but that’s not the artist’s concern.
Normally it is recommended to use quads for organic models, because quads play nicely with the commonly used Catmull-Clark subdivision surface algorithm. Quads also allow you to select loops of faces with a single click, because with quads it is easier to detect a loop. Deformation of a skinned model is also easier to work with when it’s quads rather than triangles.
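The loop-selection point can be shown in code: inside a quad, every edge has exactly one opposite edge, so walking a loop is just “enter through one edge, exit through the opposite” until it closes or hits a border. A toy sketch on a made-up quad grid (not how any particular DCC implements it):

```python
def make_grid_quads(n):
    """Quads of an (n+1) x (n+1) vertex grid, as 4-tuples of vertex ids."""
    vid = lambda r, c: r * (n + 1) + c
    return [(vid(r, c), vid(r, c + 1), vid(r + 1, c + 1), vid(r + 1, c))
            for r in range(n) for c in range(n)]

def opposite_edge(quad, edge):
    """The edge of `quad` sharing no vertex with `edge` (unique in a quad)."""
    shared = set(edge)
    for i in range(4):
        e = (quad[i], quad[(i + 1) % 4])
        if not (set(e) & shared):
            return e
    return None

def walk_loop(quads, start_edge):
    """Follow an edge loop from start_edge; returns the edges visited."""
    edge_to_quads = {}
    for q in quads:
        for i in range(4):
            key = frozenset((q[i], q[(i + 1) % 4]))
            edge_to_quads.setdefault(key, []).append(q)
    loop, edge, seen = [], start_edge, set()
    while edge is not None and frozenset(edge) not in seen:
        seen.add(frozenset(edge))
        loop.append(edge)
        nxt = None
        for q in edge_to_quads.get(frozenset(edge), []):
            opp = opposite_edge(q, edge)
            if opp is not None and frozenset(opp) not in seen:
                nxt = opp
                break
        edge = nxt
    return loop
```

On a triangulated mesh there is no unique “opposite edge”, which is exactly why loop selection stops being a one-click operation there.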
Interestingly, quad-based rendering was pioneered by Virtua Fighter on the Sega Saturn, and back then it was considered strange, as triangles were normally used instead. Years later that changed.
A sculpted model, even if optimized, will be triangle soup.
Now, regarding topology. When a model self-intersects, the polygons can create a z-fighting effect, especially when they’re nearly coplanar. However, intersecting models are used everywhere, usually to “kit bash” areas. This has been going on since Morrowind times.
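The z-fighting part comes down to depth-buffer precision. A small sketch (made-up near/far planes, 24-bit depth) shows that two surfaces 1 mm apart sit hundreds of quantizer steps apart near the camera but less than one step apart in the distance, so far away they can land on the same depth value and flicker:

```python
def depth_steps(z, near=0.1, far=1000.0, bits=24):
    """Perspective depth for view-space distance z, in quantizer steps.

    Uses the standard mapping d = far/(far-near) * (1 - near/z), which
    concentrates precision near the camera; near/far values are made up.
    """
    d = (far / (far - near)) * (1.0 - near / z)
    return d * ((1 << bits) - 1)

# Two nearly coplanar surfaces 1 mm apart, up close vs far from the camera:
gap_near = depth_steps(2.001) - depth_steps(2.0)     # hundreds of steps apart
gap_far = depth_steps(500.001) - depth_steps(500.0)  # under one step: z-fighting
```

That’s also why z-fighting on kit-bashed geometry often only shows up at a distance, and why pulling the near plane out is the classic fix.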
Way back, I heard that topology used to be important for maximising re-use of cached calculated vertex data, but I don’t know how relevant that is now. I believe importers are also able to optimise for that, so the artist generally doesn’t need to worry about it.
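That cache is the post-transform vertex cache, and its effect is easy to simulate. A toy sketch (FIFO cache of a made-up size; real GPU caches differ) counting vertex-shader runs for a cache-friendly index order versus a shuffled one:

```python
from collections import deque
import random

def transformed_vertices(indices, cache_size=16):
    """Count vertex-shader invocations for an index buffer under a simple
    FIFO post-transform cache (real GPU caches are more complex)."""
    cache, misses = deque(maxlen=cache_size), 0
    for i in indices:
        if i not in cache:
            misses += 1
            cache.append(i)
    return misses

def grid_indices(n):
    """Triangle indices for an n x n quad grid, row by row (cache friendly)."""
    out = []
    for r in range(n):
        for c in range(n):
            a, b = r * (n + 1) + c, r * (n + 1) + c + 1
            d, e = (r + 1) * (n + 1) + c, (r + 1) * (n + 1) + c + 1
            out += [a, b, d, b, e, d]
    return out

ordered = grid_indices(16)
shuffled = ordered[:]
random.seed(1)
random.shuffle(shuffled)
# transformed_vertices(ordered) is far lower than transformed_vertices(shuffled)
```

This is the optimisation importers and mesh-processing tools do for you, which is why it stopped being something the artist plans topology around.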
Yes I agree. For now you can use the other debug views. If overdraw is too dense or wireframe too dense then these are still good indicators that those are problem areas.
The worst kind of GPU problem is often that the polys look fine and spaced out from the front, but when you move to a side or grazing angle, suddenly all those triangles are being rasterised over and over because shading is done in blocks. It’s a big silent killer.
Tunnels and corridors like that in VR can wreck performance, even at low counts.