Currently I am trying to overcome the float precision problems that Unity and some other game engines have when dealing with large-scale maps (rendering issues as well as physics calculations being off).
A common way to deal with these issues is a method called ‘floating origin’, where you essentially move the player back to (0, 0, 0) and shift the world accordingly.
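For a single-player game, the core of it can be as small as this sketch (the threshold is an arbitrary placeholder, and a real version would also need to handle particle systems, physics state, and so on):

```csharp
using UnityEngine;

// Bare-bones floating origin: once the player drifts too far from
// (0, 0, 0), shift every root object back so the player is re-centered.
public class FloatingOrigin : MonoBehaviour
{
    public Transform player;
    public float threshold = 1000f; // distance before re-centering (placeholder)

    void LateUpdate()
    {
        Vector3 offset = player.position;
        if (offset.magnitude < threshold) return;

        // Move all root objects (player included) by -offset,
        // which puts the player back near the origin.
        foreach (GameObject root in gameObject.scene.GetRootGameObjects())
            root.transform.position -= offset;
    }
}
```

For multiplayer, however, this gets a bit trickier; there is no single player whose position you can re-center on.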
I found this post on the Unity forum with a question about using doubles instead of floats, and a Unity employee answered saying that using ‘simulation tiles’ is a good solution to this problem.
I believe he means to split the world into multiple sections that each do their own physics calculations. I assume these sections could also serve as a center for the camera when rendering.
My question is, how would something like this be implemented in Unity? I haven’t been able to find information about a system that I can use to create a second scene with a different origin.
Thanks. I understand that with DOTS I might be able to use camera-relative rendering when using the HDRP, but I don’t quite understand how to re-center the origin of a second physics scene using the com.unity.physics package. With Havok I believe I can just adjust the bounds of the world with some setting, but I’m not sure whether that still works if the center is, let’s say, 1.5 million meters away from the origin (about the size of the world I’m trying to make).
There shouldn’t be a re-centering; all the tiles would be in the same place. If this is a question about how to do something concretely with DOTS related tech, why don’t you ask it in the DOTS forum?
Good point. I wasn’t entirely sure whether this problem would be completely DOTS-related, since mono-Unity also suffers from these issues. But I’ll move my question over to those forums instead.
With “All the tiles would be in the same place”, do you mean that whenever a player moves to a different ‘scene’, an origin shift would still be needed, because that new scene is in the same world space as the first one?
First, a disclaimer: I’ve never implemented something like this, and I doubt many people have. I’ve read a good bit about it through the years because it’s very interesting, and because there was a time when MMORPGs were all the rage.
Second, a question: Are you doing an MMO? Often, you can miss some good answers if you don’t give more context outside of the technical stuff. The fact that you need a scale of millions of meters, and that you need all the players in the world to be part of the same meta-simulation, makes me think you are doing an MMO game. If you aren’t, there may be better alternatives, like putting players that are close enough to interact in a single simulation, shifting the simulation origin to fit them all. If you are, you probably need a lot more expertise and support than what you’d get for free in this forum.
That said, the idea is that all “simulation tiles” have their center at (0, 0). Clients get data for the tile they are in and for tiles close to them. Now, when a player enters another tile, you shift the player’s position to reflect their position in the new tile’s coordinate system, like when an old platformer changes screens. That’s not the only thing that happens, though; the player usually also changes server, because all the tiles can’t be simulated on a single machine. Handling what happens when things in different tiles interact with each other is very complex; you might design your game to reduce the chance of that happening.
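To make that concrete, the position bookkeeping could look something like this sketch (type names and tile size are made up; the server handoff is a separate, much bigger problem):

```csharp
using UnityEngine;

// Each entity stores which tile it is in, plus a float position local
// to that tile's origin, so coordinates always stay small.
public struct TiledPosition
{
    public int TileX, TileZ;  // integer tile coordinates
    public Vector3 Local;     // position relative to the tile's origin

    const float TileSize = 2000f; // meters per tile side (assumption)

    // Call after movement: if Local has left the tile, carry whole
    // tiles over so Local lands back inside [0, TileSize).
    public void Rebase()
    {
        int dx = Mathf.FloorToInt(Local.x / TileSize);
        int dz = Mathf.FloorToInt(Local.z / TileSize);
        if (dx == 0 && dz == 0) return;

        TileX += dx;
        TileZ += dz;
        Local -= new Vector3(dx * TileSize, 0f, dz * TileSize);
    }
}
```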
Finally, with the tech we have now, you could do your whole simulation in doubles and get rid of all that origin-shifting stuff. You’d have to do your own physics, but MMOs tend to have simple physics requirements. “Simulation tiles” are mainly useful for splitting a simulation that doesn’t fit on a single server; that’s their main purpose, so they could still be useful to you.
To answer your question: no, I am not making an MMO in that sense. I am trying to create something like Minecraft, a procedurally generated world that you can play in with around 4-8 people on a decent computer.
Thank you for explaining the simulation tiles, I think I get their concept now.
The ‘edge cases’ definitely seem like something very complex to deal with in this sense, both with physics and with placing things like trees (which might have their base in one scene and some leaves in another, for example).
I think that using doubles would definitely be the easiest solution to my problem, although not very straightforward to implement in Unity. In addition, the native physics engine would break at some point, for which the simulation tiles would be a good alternative, but as mentioned earlier, they introduce problems that have to be resolved.
Okay, for something like Minecraft, you’ll probably want to divide the world into chunks too, no matter how you handle the floating point issues. You don’t want to deal with fitting all that into RAM, and the CPU can’t simulate the whole thing.
There are many strategies for dealing with the floating point issue. Here are some ideas I’m just coming up with on the fly. They are probably not very solid, but maybe they help you get some clarity or some starting points for web searches. I’d still recommend at least looking at talks from devs of games like the one you want to make.
1. Don’t deal with it. Minecraft didn’t do a lot to deal with it when I used to play it. I don’t know if they deal with it now, but I remember things got more and more jittery the further you went. It didn’t matter, as all important positional phenomena happened in discrete steps, in units of blocks, so there couldn’t be any hard inconsistencies from things moving a few centimeters off. Also, there weren’t a lot of reasons for traveling such long distances anyway.
2. Don’t make your world so big. Or add an in-game mechanic that justifies dividing the game into different zones, like multiple small worlds connected through portals. With normal floating point precision you can have worlds around 9 km in diameter without too much trouble (a 32-bit float still gives roughly millimeter precision at 4-5 km from the origin). Isn’t that big enough for a game of 4 to 8 players, especially if you can have multiple zones of this size at the same time? In the thread you shared, a person from Havok said it can handle distances of up to 100 km; that’s 200 km in diameter.
3. Use doubles. You could run your simulation in doubles, then convert to floats around the player for Unity’s rendering. If you are literally doing something like Minecraft, with blocks, doing your own physics simulation is not that far-fetched; it’s just a bunch of kinematic controllers, no dynamic rigidbody shenanigans. There aren’t even any slopes.
The worst thing about this idea is that you’d have to homebrew a lot of stuff and connect it to Unity. There are probably some packages and assets that could already help you, though. From where I stand, using doubles still seems like the simplest way to deal with such scales.
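A minimal sketch of what that double-to-float conversion could look like (made-up types; the important part is that the subtraction happens in doubles before anything is narrowed to float):

```csharp
using UnityEngine;

// Absolute positions live in doubles; Unity only ever sees positions
// relative to the player/camera, which are small enough for floats.
public struct DVector3
{
    public double x, y, z;
}

public static class DoubleSim
{
    // Subtract in double precision first, then narrow to float.
    public static Vector3 ToRenderSpace(DVector3 world, DVector3 origin)
    {
        return new Vector3(
            (float)(world.x - origin.x),
            (float)(world.y - origin.y),
            (float)(world.z - origin.z));
    }
}
```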
4. Use that “simulation tiles” idea. The tiles don’t have to be the same as the chunks you use to load the world. These kinds of strategies seem very complex, though; I don’t know if they are really worth it for this kind of project, especially when dealing with interaction between tiles.
One way to simplify interaction between tiles could be to use two coordinate systems: one in singles for physics, Unity transforms, and everything tied to a “simulation tile”, and one in doubles for player interactions. You’d then have some methods to convert coordinates from one system to the other. This provides an abstraction, so player interactions don’t need to consider the idea of “simulation tiles” every time.
For example, if a player shoots a gun, you’d use the double-type coordinates to determine whether it hit someone; you wouldn’t be able to use Unity’s raycasts, but it would just work, without tile-related code in every interaction of this type.
This “two coordinate systems” idea doesn’t handle collisions that happen exactly at the edge of a “simulation tile”, though. You’d have to get tricky with those. Or maybe write your own character controllers that don’t use Unity physics, so they can work in the double-type coordinates directly, but at that point you are very close to the previous idea.
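Sketched out, those conversion methods could look roughly like this (all names and the tile size are assumptions; DVector3 is the same double-based vector as in idea 3):

```csharp
using UnityEngine;

public struct DVector3 { public double x, y, z; } // as in idea 3

public static class TileCoords
{
    const double TileSize = 2000.0; // must match the simulation tiles (assumption)

    // Tile-local floats -> game-wide doubles, e.g. for hit checks.
    public static DVector3 ToWorld(int tileX, int tileZ, Vector3 local)
    {
        return new DVector3
        {
            x = tileX * TileSize + local.x,
            y = local.y,
            z = tileZ * TileSize + local.z
        };
    }

    // Game-wide doubles -> tile index plus tile-local floats.
    public static (int tileX, int tileZ, Vector3 local) ToTile(DVector3 p)
    {
        int tx = (int)System.Math.Floor(p.x / TileSize);
        int tz = (int)System.Math.Floor(p.z / TileSize);
        var local = new Vector3(
            (float)(p.x - tx * TileSize),
            (float)p.y,
            (float)(p.z - tz * TileSize));
        return (tx, tz, local);
    }
}
```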
5. Some games create simulation scenes dynamically for players that are close enough to interact. Maybe that works for your floating point problem too. Your scenes shouldn’t have players that are farther than around 9 km from each other. You’d do a floating origin for each scene, but instead of using a single player’s position as the origin, you’d use the midpoint between all the players. You’d also have to handle players entering and exiting different simulations as they get close to or far from each other, but you wouldn’t need to handle interactions between players in different scenes.
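Computing that shared origin is the easy part; a trivial sketch (positions are the absolute double-based ones, names are made up):

```csharp
using System.Collections.Generic;

public struct DVector3 { public double x, y, z; } // as in idea 3

public static class GroupOrigin
{
    // Midpoint (centroid) of all players in a group; the scene then
    // stores everything relative to this point, keeping floats small.
    public static DVector3 Midpoint(IReadOnlyList<DVector3> players)
    {
        var sum = new DVector3();
        foreach (var p in players)
        {
            sum.x += p.x;
            sum.y += p.y;
            sum.z += p.z;
        }
        sum.x /= players.Count;
        sum.y /= players.Count;
        sum.z /= players.Count;
        return sum;
    }
}
```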
I agree, in my case using doubles is probably the ‘easiest’ way to get around the issues. Also, trying to implement my own physics engine seems like a fun challenge anyway, so I’ll try to go down that route.
Nr. 5 also seems like a solid approach, so that might be a good alternative.
Again, thank you for your help. I feel like I understand these concepts much better now.
No problem. If you need more complex physics, I believe Bullet can be compiled to use doubles, and there must already be some libraries to access it from C#. So that could be an alternative for handling physics.