I’ve discussed this project in a previous post.
TL;DR: I want to generate meshes of implicit surfaces described by formulae.
I had looked at Marching Cubes earlier, but it had issues with performance and mesh quality, so I pivoted to Dual Contouring.
I’ve completed the implementation up to generating the points on the surface described by the equations, but I’m lost on how to go about meshing these points together.
In most resources I’ve gone through to understand this algorithm, including this paper and a blog post (Boris the Brave), the meshing portion isn’t explained well. The most that’s explained is choosing 4 adjacent cells that share an edge with a surface intersection, calculating their QEF-minimizing points, and joining those points into a quad (2 triangles). How this is actually achieved is ambiguous.
My question here would be: How should I mesh the points together?
When doing procedural meshing, it’s much more about the exact order of operations than it is about the points. This sequential continuity is what consistently gives you a series of points that you can predictably turn into triangles. It’s all about the triangles in the end.
That said, generating the points beforehand doesn’t make much sense unless you want to apply some holistic, general algorithm to “meshify” the point cloud. Usually that’s some sort of convex hull algorithm, adaptive triangulation, shrink wrap, marching cubes/tetrahedra, and so on. These are expensive, both in development and in performance, and give dubious results in the general case. They are only really useful (and developed) in industries where point data is already available, or if you have a specific category of mesh in mind.
For the more common (and easier) procedural meshing, points don’t mean much unless you have them in some particular order that gives you leverage.
Now there are ways (operators) to introduce some order into an unordered point set, but these are always highly specific; I can’t give you any general answer that will apply to your case 100%. There are also relaxation and smoothing methods you can apply to a marching cubes mesh that isn’t to your liking, as an example.
I remember building a tessellated icosahedron a couple of years ago. That included creating my own point set dictionary, which in turn required me to introduce my own vector types (because of how hashing worked in my case, it had to be done intrinsically). So one has to go a long way to implement such things, and you are expected to understand a lot about 3D meshes.
If you already have closed-form formulas to generate your points, don’t underestimate the power of having a predictable sequence of triangle strips or clusters, or simplified structures that can be easily tessellated post-generation. That’s all I can say really.
Edit:
This torus, for example, is a prime candidate for post-generation tessellation, because there is a simplest mesh you can generate from which you can derive any resolution, and then polish your points via formulas. This means you can address the topology problem first (even manually, in Blender), and then refine the results to your liking.
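To make the “polish your points via formulas” part concrete: the torus can be sampled directly from its parametric formula at any resolution, so any base mesh’s vertices can be re-projected onto it. A minimal Python sketch (the function name and default radii are my own, purely for illustration):

```python
import math

def torus_points(R=2.0, r=0.5, n_major=16, n_minor=8):
    """Sample the parametric torus at an arbitrary resolution.
    R is the major (ring) radius, r the minor (tube) radius.
    Every returned point lies exactly on the surface."""
    pts = []
    for i in range(n_major):
        u = 2 * math.pi * i / n_major        # angle around the ring
        for j in range(n_minor):
            v = 2 * math.pi * j / n_minor    # angle around the tube
            x = (R + r * math.cos(v)) * math.cos(u)
            y = (R + r * math.cos(v)) * math.sin(u)
            z = r * math.sin(v)
            pts.append((x, y, z))
    return pts
```

Because the formula is closed-form, refining the mesh is just sampling with larger `n_major`/`n_minor` and snapping vertices to the nearest `(u, v)`.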
Here’s one way you can break any triangle into 4 smaller ones (and via barycentric coordinates you can unlock this further)
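That 1-to-4 split is just inserting the midpoint of each edge (the barycentric average of its endpoints). A minimal sketch, with illustrative names:

```python
def subdivide(tri):
    """Split one triangle (three 3D points) into 4 smaller ones by
    inserting edge midpoints. The center triangle is formed from the
    three midpoints; the other three each keep one original corner."""
    a, b, c = tri
    mid = lambda p, q: tuple((pi + qi) / 2 for pi, qi in zip(p, q))
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
```

Applied recursively, this gives any resolution you like; afterwards you can project the new vertices back onto the exact surface formula.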
Well, I was hoping for a solution more focused on Dual Contouring, although this did help clear up some doubts.
I found a blog which made it all click for me, and it does use more information than just sample data and a point cloud.
As the image describing meshing in that blog suggests, we need to shift perspective from samples or cells to individual edges. If an edge has an intersection, then each of the 4 cells sharing that edge will have a point, and together those make a quad, and subsequently triangles.
All I have to do next is make sure the winding direction for gathering the points within those cells is aligned with the sample points on each end of the edge, i.e., the sample point at the end of the edge that is outside the surface must see the cell points in clockwise order.
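Here’s a rough sketch of how I understand that edge-centric step. All names and the grid layout are just my illustration, not from the blog: `inside(x, y, z)` samples the sign of the implicit function at a grid corner, and `cell_point` maps a cell index `(i, j, k)` to that cell’s QEF-minimizing point:

```python
def emit_quads(inside, cell_point, size):
    """For every axis-aligned grid edge whose endpoints have opposite signs,
    connect the points of the 4 cells sharing that edge into a quad
    (2 triangles), flipping the winding based on which end is inside."""
    tris = []
    # The 4 cells around an edge along one axis are offset in the other two axes,
    # listed so they cycle around the edge.
    ring = {
        0: [(0, 0, 0), (0, -1, 0), (0, -1, -1), (0, 0, -1)],  # x-aligned edges
        1: [(0, 0, 0), (0, 0, -1), (-1, 0, -1), (-1, 0, 0)],  # y-aligned edges
        2: [(0, 0, 0), (-1, 0, 0), (-1, -1, 0), (0, -1, 0)],  # z-aligned edges
    }
    for axis, offsets in ring.items():
        for x in range(size + 1):
            for y in range(size + 1):
                for z in range(size + 1):
                    p0 = (x, y, z)
                    p1 = tuple(p0[i] + (1 if i == axis else 0) for i in range(3))
                    if p1[axis] > size:
                        continue  # edge would leave the grid
                    s0, s1 = inside(*p0), inside(*p1)
                    if s0 == s1:
                        continue  # no sign change: surface doesn't cross this edge
                    cells = [(x + dx, y + dy, z + dz) for dx, dy, dz in offsets]
                    if any(c not in cell_point for c in cells):
                        continue  # edge lies on the grid boundary
                    quad = [cell_point[c] for c in cells]
                    if s0:
                        quad.reverse()  # flip winding when the inside end comes first
                    a, b, c, d = quad
                    tris.append((a, b, c))
                    tris.append((a, c, d))
    return tris
```

This is only how I pieced it together; the offset tables encode which 4 cells ring each edge, and the sign at the edge’s first endpoint decides the winding.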
Here’s the result I got for a simple torus:
I also ensured that triangles with the opposite winding are present, so that the inside of the mesh renders properly as well.
Well, yes, winding order is a fundamental piece of 3D graphics. Unfortunately it wasn’t apparent from your original post (to me at least) that you lacked this critical piece of knowledge.
If it’s of any help further down the line: there are a dozen mesh representations that can be used in various algorithmic scenarios, and the decision of which one to use can make or break your ability to build, or even comprehend, certain topological operations involving the mesh geometry.
What the GPU uses (and subsequently Unity, for simplicity, efficacy, and compatibility) is actually the worst kind of mesh (as a structure); it is chosen strictly to be utilized performantly by the hardware, but it leaves a lot to be desired when it comes to manipulation and processing, rather than just rendering.
I personally always create (or lately recycle) my own structures, some of which are very similar to half-edge representation, when I intend to dynamically/procedurally build complex stuff.
However, instead of the doubly-linked part, I have a different arrangement where I treat edges as directed but transient elements, which can be used as queries for more concrete types I call nodes (precursor of vertices, capable of one-to-many relationship) and triplices (pl. of face triplex, which refers to three nodes via integer ids and represents a precursor to a triangle). This allows for some very advanced manipulation, very fast mesh querying (neighboring elements, uv islands, rings, loops, selection growth …), optimal petal sorting*, automatic explosion of nodes into vertices, splitting of edges, edge rotation, automatic or manual vertex merging, relaxation, selective mesh smoothing, uv seams, finding islands, finding holes, face tessellation, triangulation, extrusion / insetting, fast boundary checks (i.e. whether vertex/edge/face is part of the mesh boundary, which is a hard problem otherwise), and so on and so on…
* btw, Blender actually uses a very similar structure to mine, called BMesh; they just refer to what I call “petals” as the disk cycle.
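To give a flavor of what I mean by transient directed edges: here’s a deliberately minimal toy sketch, nowhere near the real structure, showing how edges can be derived on demand from face triples and used for cheap boundary queries (names like `triplex` loosely follow my terminology above):

```python
from collections import defaultdict

class TriMesh:
    """Toy version: nodes + triplices only; directed edges are never stored,
    they are derived ('transient') whenever a query needs them."""

    def __init__(self):
        self.nodes = []      # node positions (precursors of vertices)
        self.triplices = []  # faces as triples of node ids

    def add_node(self, p):
        self.nodes.append(p)
        return len(self.nodes) - 1

    def add_triplex(self, a, b, c):
        self.triplices.append((a, b, c))

    def edge_map(self):
        """Directed edge -> list of face ids that contain it."""
        edges = defaultdict(list)
        for f, (a, b, c) in enumerate(self.triplices):
            for e in ((a, b), (b, c), (c, a)):
                edges[e].append(f)
        return edges

    def boundary_edges(self):
        """On a consistently wound 2-manifold, an edge is on the boundary
        iff its reverse direction belongs to no face."""
        edges = self.edge_map()
        return [e for e in edges if (e[1], e[0]) not in edges]
```

The real thing carries far more (petal sorting, node explosion, etc.), but even this tiny version makes the “hard otherwise” boundary check a one-liner.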
Edit:
There is a lot of depth to this; for example, I’d consider this kind of mesh non-manifold (if it had shared vertices).
However, with my kind of mesh structure, you can preserve a 2-manifold mesh with shared vertices and automatically detach the offending faces. Etc. At this point I’m just chatting about what is possible and what the realistic ceiling is for this kind of work in general.