I'm trying to make voxel terrain. I've understood that a mesh is composed of an array of vertices and triangles,
and the idea is to render, per chunk, only the geometry of blocks that are adjacent to air.
But since the mesh is described by arrays, my question is: how am I supposed to handle placing and destroying blocks?
It seems one way would be to just add the new block and then regenerate the entire mesh, but that seems like it would be extremely inefficient.
The mesh generation can be performed in the background. You can optimize by modifying only the affected parts of the mesh, by (temporarily or generally) subdividing the chunk into smaller sub-chunk meshes, or by delaying mesh generation so as not to regenerate the mesh continuously while the user is digging.
I'd go for sub-meshes per chunk. Given a Minecraft 16x16x256 chunk, it would make sense to split the chunk vertically into 16 meshes, so you'd get 16x16x16 sections. User modifications are almost always contained within one of those sub-meshes. The fact that they are stacked atop each other also allows for occlusion culling, or for flagging some of these sections as "air only" and thus not generating a mesh for them at all.
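To illustrate the section/dirty-flag idea, here is a minimal language-agnostic sketch in Python (all names are illustrative, not any engine API): a 16x16x256 chunk is split into 16x16x16 sections, and a block edit flags only its own section for remeshing, plus a neighbouring section when the edit sits on a boundary.

```python
CHUNK_Y = 256    # chunk height in blocks
SECTION_Y = 16   # height of one sub-mesh section

class ChunkSection:
    def __init__(self):
        self.dirty = False     # mesh needs rebuilding
        self.air_only = False  # if True, skip mesh generation entirely

class Chunk:
    def __init__(self):
        self.sections = [ChunkSection() for _ in range(CHUNK_Y // SECTION_Y)]

    def mark_block_modified(self, x, y, z):
        s = y // SECTION_Y
        self.sections[s].dirty = True
        # A block on a section boundary can expose or hide faces in the
        # neighbouring section, so flag that one too.
        if y % SECTION_Y == 0 and s > 0:
            self.sections[s - 1].dirty = True
        if y % SECTION_Y == SECTION_Y - 1 and s < len(self.sections) - 1:
            self.sections[s + 1].dirty = True
```

A background job would then remesh only the dirty sections and clear the flags, leaving the other 15 sub-meshes untouched.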
So let's say I have a byte array that describes my terrain: 0 = air, 1 = dirt, etc.
Normally, to generate the mesh, I iterate through my array and find which dirt blocks have air directly above them.
I then generate geometry only from those dirt blocks that have air above them.
Let's say my terrain is a superflat world.
On my 16x16 chunk, this will generate a flat plane with 16 x 16 x 4 vertices (since I'm only generating visible faces).
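The procedure described above (emit a quad only where a solid block has air directly on top) can be sketched like this; this is a hedged, simplified Python illustration covering only the top faces, with a small 16x4x16 example chunk rather than any particular engine's mesh types:

```python
AIR, DIRT = 0, 1
SX, SY, SZ = 16, 4, 16  # example chunk dimensions

def block(blocks, x, y, z):
    # Treat everything outside the chunk as air.
    if 0 <= x < SX and 0 <= y < SY and 0 <= z < SZ:
        return blocks[x][y][z]
    return AIR

def build_top_faces(blocks):
    """blocks[x][y][z] -> block id. Returns (vertices, triangles)."""
    vertices, triangles = [], []
    for x in range(SX):
        for y in range(SY):
            for z in range(SZ):
                if blocks[x][y][z] == AIR:
                    continue
                if block(blocks, x, y + 1, z) != AIR:
                    continue  # face hidden by the solid block above
                i = len(vertices)
                top = y + 1
                vertices += [(x, top, z), (x + 1, top, z),
                             (x + 1, top, z + 1), (x, top, z + 1)]
                triangles += [i, i + 1, i + 2, i, i + 2, i + 3]
    return vertices, triangles
```

For a superflat chunk this produces exactly the 16 x 16 x 4 = 1024 vertices mentioned above; removing one block drops that to 1020 (full handling of the newly exposed side faces is omitted here for brevity).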
Are you saying that if I destroy a block (so now there's a hole in my perfectly flat ground), I'm supposed to mark the 9x9 area around that block as "modified" and then iterate through all relevant blocks in the chunk to recreate the mesh from scratch?
Sorry if this sounds obvious; I just don't know how it's supposed to be done.
Greedy meshing is the term for the technique. It's extremely efficient because, while processors and graphics cards have seen massive performance increases over the past couple of decades, the performance of memory and of the buses that transfer data between memory and the CPU/GPU hasn't really kept up.
For example:
PCIe 1.0 x16 is 4GB/sec and PCIe 3.0 x16 is 16GB/sec. That’s 4x.
DDR-400 is 3.2GB/sec and DDR4-3200 is 25.6GB/sec per DIMM. That’s 8x.
A Pentium 4 scores ~300 in PassMark. A Ryzen 1600X scores ~13,000. That’s 40x.
Another factor is the internal caches of the CPU/GPU. A Ryzen 1600X is estimated to have an L1 bandwidth of ~512GB/sec, L2 at ~256GB/sec, and L3 at ~80GB/sec. It's why data-oriented programming is as huge as it is: if you can compress your data to fit into the caches, you automatically win orders of magnitude of performance.
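For concreteness, here is a hedged sketch of the core of greedy meshing, reduced to 2D: given a mask of exposed faces of the same block type on one plane, merge them into as few rectangles (and thus as few quads) as possible. Names and the row-then-column growth strategy are illustrative; real implementations sweep all six face directions.

```python
def greedy_rects(mask):
    """mask: 2D list of bools (exposed faces). Returns (x, y, w, h) rectangles."""
    h, w = len(mask), len(mask[0])
    used = [[False] * w for _ in range(h)]
    rects = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x] or used[y][x]:
                continue
            # Grow the rectangle rightwards as far as possible...
            rw = 1
            while x + rw < w and mask[y][x + rw] and not used[y][x + rw]:
                rw += 1
            # ...then downwards while every cell of the next row matches.
            rh = 1
            while y + rh < h and all(mask[y + rh][x + i] and not used[y + rh][x + i]
                                     for i in range(rw)):
                rh += 1
            for dy in range(rh):
                for dx in range(rw):
                    used[y + dy][x + dx] = True
            rects.append((x, y, rw, rh))
    return rects
```

On a fully exposed 16x16 plane this collapses 256 quads into a single one, which is where the bandwidth savings described above come from.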
Thanks for the reply and the link; I've looked it over.
I don't think I can go greedy to this level, since I need to keep a different UV for every face, right? I can only avoid creating mesh faces that are not visible to the camera.
Maybe this greedy-mesh scenario would become clearer to me if I knew: if a block is deleted and the mesh needs to be retouched, does the entire mesh need to be "regenerated"?
My terminology might be bad, but what I mean by regeneration is recreating the vertex, triangle, and UV arrays from scratch, inserting and removing the data for the modified blocks in the arrays that define the mesh, and then applying it.
My overall question is: is it standard procedure to just remake the whole mesh (and is that a fast operation, provided adequate chunking), or is there some trick to modify only certain vertices and triangles? (The latter would require keeping a lot of references to know which vertices were affected by breaking or adding blocks.)
Remeshing a chunk section really isn't that expensive. In the absolute worst case (a completely filled section of a transparent block, so every block would render all 6 faces), a section only has 98,304 vertices, so about 100k. This is perfectly fine to update very quickly; re-uploading such a mesh every frame should be no problem for any modern hardware.
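The worst-case arithmetic behind that figure, spelled out:

```python
blocks = 16 ** 3      # 4096 blocks in a 16x16x16 section
faces = blocks * 6    # worst case: all 6 faces of every block are visible
vertices = faces * 4  # 4 vertices per quad face
print(vertices)       # 98304
```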
Greedy meshing is quite a bit more complex. Not on the meshing part, but you would need a special shader, because you cannot use the texture wrap mode with a greedy mesh and a texture atlas. The shader has to reconstruct the actual UV from the interpolated fractional world/local position, and the actual tile that should be used needs to be encoded differently.
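The per-fragment UV reconstruction the shader would perform can be sketched as follows; this is plain Python for clarity (not shader code), and the 16-tiles-per-row atlas layout and function names are assumptions:

```python
import math

def atlas_uv(local_u, local_v, tile_index, tiles_per_row=16):
    """Rebuild a tiling UV from a fractional position, then offset into an atlas."""
    tiled_u = local_u - math.floor(local_u)  # frac(): what the GPU's wrap mode did
    tiled_v = local_v - math.floor(local_v)
    tx = tile_index % tiles_per_row          # tile's column in the atlas
    ty = tile_index // tiles_per_row         # tile's row in the atlas
    return ((tx + tiled_u) / tiles_per_row, (ty + tiled_v) / tiles_per_row)
```

The tile index would come from a per-vertex attribute or similar encoding, which is the "encoded differently" part above.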
I haven't actually tried this yet, but Unity now provides the Mesh.SetVertexBufferParams method, which allows you to customize the actual native vertex buffer format. I wonder what the smallest data format for the position actually is. Since we only need positions in the range 0-15, in theory a 4-bit integer per component would be enough. Something similar applies to the normals; there a 2-bit format per component would be enough (to hold a signed -1, 0, 1). Of course, when using a custom shader we could get rid of most of the vertex data, as we could encode the normal in different ways or even calculate it with the partial screen-space derivative functions ddx/ddy.
Though this is probably all a bit too advanced. Just re-meshing should be fine.
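The packing idea itself is straightforward; here is a hedged Python sketch (illustrative names, no engine API) showing that a section-local position (0..15 per axis, 4 bits each) plus one of the 6 axis-aligned face normals (3 bits) fits in a single 16-bit integer, versus the 24 bytes that three float positions and three float normals would take:

```python
def pack_vertex(x, y, z, normal_index):
    """x, y, z in 0..15; normal_index in 0..5 (one of the 6 face normals)."""
    assert 0 <= x < 16 and 0 <= y < 16 and 0 <= z < 16 and 0 <= normal_index < 6
    return x | (y << 4) | (z << 8) | (normal_index << 12)

def unpack_vertex(packed):
    # The shader side would do the same shifts/masks to recover the data.
    return (packed & 0xF, (packed >> 4) & 0xF, (packed >> 8) & 0xF, packed >> 12)
```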
I wonder, then, why chunks in Minecraft sometimes take so long to load.
Is it not creating the mesh that takes long, but instead loading block data from the server and iterating through all the blocks to decide which ones need to be considered when creating the mesh?
Depends on the platform and its capabilities. I/O is more likely the largest factor, be it loading the chunk from disk or over the Internet.
It is not. It's actually Minecraft's chunk caching system, which is of limited size. Normally all visible, once-generated chunks reside there, but there are many historical decisions involved in exactly how it is populated.
So once a chunk gets invalidated but has to be redrawn, a new one is ordered, which is then regenerated from data. This data is sometimes not near enough, but instead has to be pulled from serialized storage, which involves disk or network transmission, as explained by CodeSmile.
No one here knows exactly how Minecraft's mesh caching works; we can only speculate. But given enough operating memory (which wasn't great in the original Java version; I remember having issues getting the JVM to reserve more system memory via some -X flag or other), this quirk theoretically wouldn't appear as readily as it does, and it has nothing to do with how fast mesh generation really is.