If you don’t bother to read the thread then don’t post, because everything else you commented was wrong or out of context.
I read enough of it to get that they split it. That is smart, but could be done without the user knowing or caring, so do it transparently for the user (like others solve this problem).
And why not add an option to use a 32-bit buffer in this case, if the user knows what they’re doing?
Did you try to use the same origin for all the meshes?
If all the individual meshes have the same origin, e.g. (0, 0, 0), all the vertices from the different meshes would have the same “imprecision” and as such should fit together perfectly. But just to be clear, I have never tried it out.
Except he said he’s using mesh chunks, not the Terrain system. My first guess is mesh compression.
Like others, I suspect that there are solutions to Tiles’s problem that don’t involve simply brute-forcing larger data sets. Also, as Deon Cadme says, brute-forcing larger data sets is only a partial solution when it comes to “infinite” worlds, since it’ll just push the issue out further. (That may be good enough, it may not.)
Not only can they be split up, they almost always are. Take a character, for instance: how often is a high-quality character all one material? You’re typically going to want at least skin, clothing, eye and hair materials, plus often other materials for accessories or detachable bits. The head will often be separate so the face can be animated. And so on. So even if you do actually need a character with, say, 256k verts, once you split it over the materials and sub-meshes you’re likely to want anyway, it’s unlikely you’ll be hitting that limit.
The same applies in other areas, too. If you’ve got a building with a million verts is it likely to all be the same material? And even if it is, wouldn’t you want to cut it up a little to help out with culling in the rendering pipeline?
That’s also a valid point, of course. But the fact that different materials split the rendering into different draw calls is a different chapter. The 16-bit limit is about the pure mesh data. 32 bit can avoid gaps and shading problems, since it’s one mesh then. And you can work on the whole mesh data without fighting with several chunks.
The final solution for my next project is to use another engine now, and leave Unity for the project I mentioned. I want to deal with meshes in the megapoly range in the worst case. A single mesh makes my life for that project so much easier that it’s well worth the change. Just not sure yet which one. Still evaluating the possibilities.
The nicest solution would of course be if I could stay with Unity. Here I know how everything works. And another engine might bite me at another end. That’s why I wanted to wait for Unity 5. But it seems that Unity 5 still comes with 16 bit.
Of course I did. I did nothing wrong. It’s just that I got hit by the 16-bit limit, in this case floating point errors that summed up and then needed the usual workarounds. As said, it’s a common technique to extend the geometry at the seams a bit to close such gaps.
I know it is common to close seams like that. Usually the vertices don’t match exactly when they are relatively far away from the mesh’s origin, which is due to the limited precision of floating-point coordinates. If one mesh has its origin at (0, 0, 0) and a vertex at (1000, 0, 0), and another mesh has its origin at (10000, 0, 0) and a vertex at (-9000, 0, 0), such that it should land on the same world position, this can lead to numerical issues with the side effects you mentioned. It gets almost impossible to match the vertices exactly once scaling and rotations are involved.
However, if both meshes have their origin at (0, 0, 0), both vertices will be stored as (1000, 0, 0) and match perfectly together from a numerical point of view.
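A quick way to see the effect is to emulate 32-bit float arithmetic, which is the precision mesh vertex data is typically stored at. This is just an illustrative sketch; the specific coordinates are made up, and Python's `struct` round-trip stands in for the GPU's float32 math:

```python
import struct

def f32(x):
    """Round a Python float to the nearest 32-bit float (mesh precision)."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Mesh A: origin at x = 0, local vertex at x = 1000.1
world_a = f32(f32(0.0) + f32(1000.1))

# Mesh B: origin at x = 10000, local vertex at x = -8999.9,
# which should land on the same world position as mesh A's vertex.
world_b = f32(f32(10000.0) + f32(-8999.9))

print(world_a, world_b)
print(world_a == world_b)  # False: the seam vertices no longer match exactly
```

The two world positions end up a fraction of a millimetre apart, which is exactly the kind of sub-visible gap that shows up as cracks and shading seams between chunks.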
They all had their origin at (0, 0, 0), and they were scaled and rotated equally.
My guess is that the inaccuracies appeared during import and export. I had already worked with the chunks in my modeler.
Anyway, while it is somewhat interesting to know how those numerical inaccuracies can arise, the cause is not really that important for our discussion here. What’s important is that it can happen. That it is more complicated to work with chunks than with a single mesh. That you may need workarounds. It’s simply an example of where the 16-bit limit can bite you.
Splitting a huge mesh into more than one mesh is a standard technique, even without a 16-bit limit. If a scene needs to be optimized, those kinds of huge meshes are often split up. So it is very likely that the issue will bite again, with or without the 16-bit limitation.
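The splitting itself is mechanical: walk the triangle list, remap global vertex indices to per-chunk local ones, and start a new chunk before the local vertex count would exceed what a 16-bit index can address. A minimal sketch of that idea; the function name and data layout are my own, not Unity's importer:

```python
MAX_VERTS = 65535  # largest vertex count addressable by a 16-bit index buffer

def split_mesh(vertices, triangles, max_verts=MAX_VERTS):
    """Split (vertices, triangles) into chunks that each fit a 16-bit index buffer.

    vertices:  list of (x, y, z) tuples
    triangles: flat list of vertex indices, three per triangle
    Returns a list of (chunk_vertices, chunk_triangles) pairs.
    """
    chunks = []
    remap = {}        # global vertex index -> local index in current chunk
    chunk_verts = []
    chunk_tris = []

    for t in range(0, len(triangles), 3):
        tri = triangles[t:t + 3]
        # A triangle can add up to 3 new vertices; start a new chunk if needed.
        new_verts = sum(1 for i in tri if i not in remap)
        if len(chunk_verts) + new_verts > max_verts and chunk_verts:
            chunks.append((chunk_verts, chunk_tris))
            remap, chunk_verts, chunk_tris = {}, [], []
        for i in tri:
            if i not in remap:
                remap[i] = len(chunk_verts)
                chunk_verts.append(vertices[i])
            chunk_tris.append(remap[i])

    if chunk_verts:
        chunks.append((chunk_verts, chunk_tris))
    return chunks
```

Note that vertices shared between chunks get duplicated at the seam, and those duplicated seam vertices are exactly where the precision mismatches discussed above can creep in.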
However, if I remember correctly, Unity has planned to improve the mesh related functionality for 5.x. I am sure the 16 bit limitation will be considered.
Yes, I know. And we all know the workarounds. But as said numerous times now, it should be the artist’s decision when and where to split. It makes no sense to split a mesh that is just 5k above the limit, for example, and cause an extra draw call by doing so.
5.x? Or more like with the new UI, which was promised for Unity 3? ^^
While promises are a nice thing, you’d better never rely on promises in software development. They may or may not be fulfilled.
What I meant is that getting rid of the 16 bit limit won’t magically resolve the problem you had in general.
I didn’t write that anyone should rely on it, and I didn’t write that they promised anything. Another interpretation could be that it is unlikely they will deal with the 16-bit limit before work on those mesh improvements starts.
What it would do is allow larger contiguous areas of the same material, so that there don’t have to be as many seams to cause precision errors. If the world is small enough that the seams don’t fall within its boundaries, it’s a practical solution even if it’s not a theoretically perfect one.
What? Yes it does, especially in that case. You’re talking about doubling the memory spent on indexing for what is usually a minor increase in required capacity, rather than just splitting the data over two containers and adding very little. A “draw call” isn’t the plague some people here seem to think it is.
The fact that all of this discussion centers around a single example also suggests to me that it was a reasonable call. The occasions where it causes a practical issue are very few and far between, yet it’s saving memory with every single mesh that’s loaded.
Perhaps a solution would be to build a “DenseMesh” and “DenseMeshRenderer” into Unity that use a 32-bit index, and add an import setting that uses those rather than splitting, along with docs saying to only use that setting where visual issues are observed with the defaults. That way the default behaviour is no different and we don’t lose its benefits, but for those people where a larger index/less splitting is a practical solution, it’s there at no cost to the rest of us.
Out of interest though, @Tiles , you definitely had mesh compression turned off throughout your entire pipeline? That regularly messes with seams.
Memory is cheap. Draw calls are not.
Two examples. One was the terrain, or rather level geometry. And this one happens often enough that official workarounds for it are available. Common enough that every reasonably experienced graphics artist knows about the issue and how to fix it.
The other was my abandoned project, because it became too much effort to work with chunks. It’s a showstopper for what I wanted to do.
So we have a practical example where we need workarounds. And we have an example where it was the end of the project. At least in Unity.
Sorry, I thought I had already answered this one. Yes, off. That’s the first thing I do when importing: turn that nagger off. I still don’t understand why it has to be on by default. It’s a nasty default.
That’s kind of a wrong statement, or view, for game development: one or a few extra draw calls that might happen in certain views versus permanent memory use that takes away from everything else.
Even mobile devices can do hundreds of draw calls with decent shaders but the memory is scarce there.
There is nothing wrong with it just because it is not your point of view. When you have enough RAM available and you can save draw calls by using that memory, then it’s the better way. It’s the draw calls that influence performance. When you can reduce the draw calls, your game will run faster.
RAM, while still a limiting factor, shouldn’t be the problem anymore. Mobile devices from three generations back are not the platform to measure against anyway. Unity is multiplatform; mobile phones are not the only target platform. And gaming usually targets current hardware. Hardware evolves rapidly. Game development evolves rapidly. And so should the limits.
I’m not sure if that is an artist’s view or a developer’s point of view.
When we are talking about a few draw calls that might happen with edge-case models versus constant memory use that potentially takes away from all aspects of the game, it matters on all platforms if you want the maximum customer base. Note that mobile was just an example; there are other platforms too, like consoles or WebGL.
I’m not against having two mesh filters at all if it’s possible, but doubling the existing one for everyone just because of some rare situations that can probably be solved by artistic tricks or other improvised hacks is not that good an idea, at least not yet, in my opinion.
Sure, everybody has their own view of things.
Where I still disagree is with calling it a “rare” case where 16 bit is limiting in one way or another. As said, it is common enough that official workarounds exist. It’s just that people are used to the workarounds and to getting told “that’s the way it is, deal with it”.
Sticking with the 16-bit mesh component only is IMHO no solution either. The best solution would be to have both when possible, and let the programmer/artist decide what to use. Hm, the direct competitors Unreal 4 and CryEngine use 32 bit now. It would be interesting to know how they deal with it.
Well, maybe @Aras or someone else from Unity could share some info if they have any.
UE4 and CryEngine are also known to have the highest spec requirements when it comes to engines and target platforms.
I would be interested to know why most of you defend this limitation. It does not matter whether there are workarounds or not, as that does not change the question:
Why is there still such a limitation?
What I found:
WebGL currently supports only 16-bit indices
OpenGL ES 2.0 compliant hardware may support only 16-bit indices (32-bit indices require the OES_element_index_uint extension)
I guess for desktop it doesn’t make any difference nowadays whether you use 16- or 32-bit indexing, but I could be mistaken.
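That capability difference is why an engine supporting both widths would have to make roughly this decision per mesh and per target. A sketch of that logic, with a made-up function name and capability flag, not anything from Unity's API:

```python
def choose_index_width(vertex_count, supports_32bit_indices):
    """Pick the smallest index width (in bytes) that can address every vertex.

    supports_32bit_indices: whether the target GPU/API allows 32-bit index
    buffers (e.g. via the OES_element_index_uint extension on OpenGL ES 2.0).
    Raises if the mesh exceeds 16 bits and the platform can't go wider.
    """
    if vertex_count <= 65536:  # indices 0..65535 fit in 16 bits
        return 2
    if supports_32bit_indices:
        return 4
    raise ValueError("mesh exceeds 16-bit index range; split it into chunks")
```

On platforms without the wider format, the only fallback is the chunk-splitting discussed throughout this thread, which is presumably part of why a single global 16-bit default is the conservative choice.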
Like I said, you’re talking about increasing the memory required for all models. Also, individual draw calls are plenty cheap, and I wouldn’t even take them into consideration where you’re talking about them averaging over 30k verts anyway.
Yes, overall draw calls are an important performance metric, but this drive some people seem to have towards single draw calls for everything, regardless of what it is or where the actual bottlenecks may be, is irrational. Adding one draw call to an already dense mesh is not going to ruin your performance, and taking the rendering pipeline into account, chunking could potentially lead to improvements of its own.
Either way, it’s not a regular occurrence. General use of meshes under 65k verts, and/or cases where the splitting makes no difference, are exceptionally regular occurrences.
But this has been addressed. There’s a design goal of reducing memory usage; raising the limit globally would double the indexing memory used for all meshes, and the vast majority of meshes would see no benefit from it.
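To put rough numbers on the memory argument: for a triangle list, each triangle costs three indices, so the index-buffer size is just triangle count × 3 × bytes per index. A back-of-the-envelope sketch (the mesh size is an illustrative assumption):

```python
def index_buffer_bytes(triangle_count, bytes_per_index):
    """Size in bytes of a triangle-list index buffer."""
    return triangle_count * 3 * bytes_per_index

# A hypothetical 10k-triangle character mesh:
tris = 10_000
print(index_buffer_bytes(tris, 2))  # 60000 bytes with 16-bit indices
print(index_buffer_bytes(tris, 4))  # 120000 bytes with 32-bit indices
```

Per mesh the absolute difference is small, but it applies to every mesh in memory, which is the trade-off being weighed against the occasional extra draw call.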
I do think that an alternative system for large/dense meshes would be a good idea, though, because then the workarounds for this wouldn’t be required and the solution wouldn’t force a cost on the people not affected by the problem.
Memory may be cheap, but memory bandwidth is much less so.
Why would all models have to use the same memory? Why not support both 16-bit and 32-bit with the engine optimized for the former?
I’m in favor of supporting 32-bit meshes if it allows me to reduce development time. If more performance were needed the mesh could always be broken into 16-bit chunks.