DOTS for floating origin

So I’ve just started digging into DOTS again (there have been major changes in workflows since I dug into the MegaCity demo), and I was wondering if anyone had examples or ideas of using the most recent iterations to move meshes and physics shapes around for a floating origin in open-world scenarios. I am curious how this sort of thing would look and function.

I have not dug too deeply into the new DOTS workflows just yet. Is there anything I should know about them that relates to origin-shifted worlds? I’ve found something about WorldMeshRenderBounds and a tiny line about GameObjectConversionUtility.ConvertGameObjectHierarchy, which both seem important, but I’ve not dug into them yet. Are these still relevant for open worlds with the new DOTS entity workflows? If not, what are the current best practices for getting something like a floating origin with physics and meshes functional in DOTS?

I was curious about this, too. I created a scene and spawned 200,000 cubes, then created a simple system that shifts all their Translation values to the new zero. Running it in the editor and shifting every frame, the system showed ~0.03 ms in the Entity Debugger. I haven’t looked into hooking into subscene activation to shift on load, but the general idea seems doable.

using Unity.Entities;
using Unity.Jobs;
using Unity.Mathematics;
using Unity.Transforms;

public class RezeroingSystem : JobComponentSystem {
    private GameManager game;

    protected override void OnCreate() {
        // Cache the manager that signals when (and where) to shift.
        game = UnityEngine.GameObject.FindObjectOfType<GameManager>();
    }

    protected override JobHandle OnUpdate(JobHandle inputDependencies) {
        if (game != null && game.move) {
            game.move = false;
            // Delta between the current origin and the requested new one.
            float3 modify = game.newZero - game.currentZero;
            game.currentZero = game.newZero;

            // Apply the delta to every entity's Translation in a Burst job.
            var jobHandle = Entities
                .ForEach((ref Translation trans) => { trans.Value += modify; })
                .Schedule(inputDependencies);

            return jobHandle;
        }
        return inputDependencies;
    }
}
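
For completeness, the system above assumes a GameManager something like this minimal sketch (the original post doesn’t show it, so the fields are inferred from how they’re used):

using UnityEngine;
using Unity.Mathematics;

public class GameManager : MonoBehaviour {
    public bool move;          // set to true to request a shift this frame
    public float3 currentZero; // world position currently acting as the origin
    public float3 newZero;     // world position that should become the origin
}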

We are currently looking at adding new APIs to make floating origin easier in DOTS.
It is possible to hook up all the places yourself, but I wouldn’t call it straightforward right now.

Our goal is to make it simpler & provide samples on how to do it.


Are you guys also considering applying this to in-editor workflows for tools (i.e. for contextual content authoring)?

For example, I would like to quickly author simple content in-context, far away from the 0,0,0 origin (using existing tools like Probuilder or Polybrush). The workflow I imagine is that I would use my tool, Snapcam, to instantly “snap” around my world to various far away locations, then use Probuilder to quickly author new content for this area. In some cases, I might want to hop into Blender and sculpt something there too, then have it appear in my scene when I hop back. Ultimately, I would prefer to maintain access to as many Unity-native development tools as possible in a new, powerful, open-world context.

Another consideration on the API:

Floating origins can be handled in a few different ways, but hexagonal grids are best for visually rich games where the viewer looks at the skyline a lot (FPS or TPS games). A hexagonal streaming grid is basically a standard square grid with the 4 corners sliced off, and the points (used to load the world) duplicated and offset, making each cell hexagonal. This is important because only 6 scene chunks must be loaded around the origin (instead of the standard 8 that a square grid requires), making everything run loads faster. On top of this, LOD and imposter rendering can now be handled via a single distance from a central radius. This radius can drive scene loading/unloading too!
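
A minimal sketch of the single-radius idea (illustrative names, not an existing Unity API): the six axial neighbours that surround the centre hex cell, plus one distance check that decides full detail vs. imposter vs. culled:

using Unity.Mathematics;

public enum DetailTier { Full, Imposter, Culled }

public static class StreamingRadius {
    // The six axial-coordinate neighbours of the centre cell in a hex grid.
    public static readonly int2[] HexNeighbours = {
        new int2(+1, 0), new int2(+1, -1), new int2(0, -1),
        new int2(-1, 0), new int2(-1, +1), new int2(0, +1),
    };

    // A single distance from the (shifted) origin picks the detail tier.
    public static DetailTier Classify(float3 position, float fullRadius, float imposterRadius) {
        float d = math.length(position.xz); // origin-relative after shifting
        if (d <= fullRadius) return DetailTier.Full;
        if (d <= imposterRadius) return DetailTier.Imposter;
        return DetailTier.Culled;
    }
}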

Grid API considerations:

The hexagonal grid is how games like Spider-Man, with its distant, detailed skyscrapers (see my later World Streamer posts), and Xenoblade Chronicles X and Xenoblade Chronicles 2 (Switch) do their origin/chunk streaming (I would imagine Breath of the Wild uses this method too, since the Xenoblade devs helped Nintendo). Ideally, anything beyond the 6 scenes would be loaded as imposters in the editor, and the user could configure the distance at which this imposter rendering occurs before the scene is simply clipped/culled away. @Joachim_Ante_1 – I know you’re a programmer, but being able to visualize how in-between scenes affect the distant skyline is critical when hand-authoring content for open worlds.

https://www.youtube.com/watch?v=KDhKyIZd3O8

https://www.youtube.com/watch?v=yqY2zCo-0mQ


Hopefully this stuff will be usable with the new SceneVis collections system that was supposedly in progress. These tools all handle visibility in the editor – it would make sense for them to work together… so, theoretically, the teams involved should probably do so as well…?

Looped worlds:

While you’re at it, looped worlds would be important to have with a floating-origin system. After all, 2d tiled games on the NES / SNES did it way back in the early 90’s… Seems like 3d games should be able to handle this easily in 2020 too, yeah? … Just throwing it out there…


Someone mentioned that HDRP has an origin-shifting camera system. Do we still need to manually shift the origin?

The one bad thing with origin shifting is that every asset on the store must be origin-shift compatible, i.e. it is not allowed to store a global position anywhere; otherwise the asset must register and shift every such field in every component, array, and any other memory.

The only thing I can think of is making all positions relative to some chunk system: store an additional per-chunk offset and retrieve the global position only when needed. Better yet, don’t use global positions at all – just convert a position to be local to my entity’s chunk and then work in chunk-local space. This approach has its own disadvantages, but if something like that becomes the default way to do things in DOTS, then every asset from the store will be compatible with infinite worlds.
This would also be a good fit for camera-relative rendering, I think.

That is only camera-relative rendering. You still need to shift the origin, at least for physics, or if you are using entity positions in some custom calculations.

Think of it from the perspective of the camera, the player, or even a multiplayer group: entities in the world can have a global position that is translated to a local position as the game streams in assets/terrain/units within the scope of the camera/player/multiplayer group.

Why not just use the camera/player-shifted positions for the physics system? This would reduce the workload (you would only be origin-shifting once) and keep both systems in sync, with the most precise physics calculations centred where you need them: around the player.

Because camera-relative positions are calculated only in shaders, by simply subtracting the camera position from the world position. But yes, it would be great to have a unified floating-origin system, or just a good API for it, in DOTS/Physics.

For loading that is OK, but if you have already loaded part of the world and then move far away, you need to shift the positions of the old content, all the mobs, and the player itself so the player is back at zero. Streaming new content into the right place doesn’t help there unless you move the entire world every frame.
And moving it every frame doesn’t resolve the issue of global positions stored inside some component or array or elsewhere.

A world divided into chunks might help.
My thoughts about it:

  • Make a chunked world the core and only way to create games in DOTS.

  • For simple games the scene will consist of just one chunk, so there’s no need to think about it.

  • Make chunks hexagons (so we always have a central chunk and 6 chunks around it).

  • All dynamic objects must be automatically moved to another chunk when they cross boundaries (see the rebasing sketch after this list).

  • Create a struct GlobalPoint with 2 fields (short3 chunk, float3 localpos) that stores a global position able to survive an origin shift without special processing.

  • Instruct the community to never store a global position as a float3, only as a GlobalPoint; a float3 may hold a global position only as a local variable inside a method.

  • When the player crosses a chunk boundary, perform the origin shift at the very beginning of the next frame and apply the shift delta to every LocalToWorld component in the world.

  • Maybe allow choosing the size of a chunk.

  • With a chunk size of 1 kilometre diameter (hexagonal), we can create worlds 65,536 kilometres in each direction in 3D.
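
A minimal sketch of the auto-move rule above (square chunks assumed for simplicity; all names are illustrative, not an existing API):

using Unity.Mathematics;

public struct GlobalPoint {
    public int3 chunkpos;   // which chunk the object is in
    public float3 localpos; // position inside that chunk
}

public static class Chunking {
    public const float ChunkSize = 1000f; // 1 km chunks, as suggested above

    // Fold any whole-chunk overflow of localpos into chunkpos so the
    // local position always stays small (and therefore precise).
    public static void Rebase(ref GlobalPoint p) {
        int3 overflow = (int3)math.floor(p.localpos / ChunkSize);
        p.chunkpos += overflow;
        p.localpos -= (float3)overflow * ChunkSize;
    }
}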

Camera-relative rendering would just render chunks outward from the origin chunk and would never need any special processing.

Without creating an ecosystem inside Unity for infinite worlds by default (the new DOTS runtime is the best time to do it), we will end up with inconsistent assets in the store that cannot be used for infinite worlds.

I’m picturing the world as something like the attached diagram:
(image: upload_2020-1-23_20-26-42.png)


If you’re assuming you have a “Main Camera” concept, this might be okay, but if you have two-player split-screen on the same map with a large rendering distance (with the possibility of the two players being on different sides of the world!), this is where the problem gets hairy.
In most games your suggestion could work – but in games with a large rendering distance as well as multiple cameras, it could cause major design problems for something like the game I mentioned above.

My suggestion in this regard:

To future-proof Unity in this capacity, I suggest imposter positioning be (loosely) tied to camera/shader display rather than standard entity-position rendering. In other words, once entities marked as imposters (usually large swathes of land, as seen in Xenoblade Chronicles X and 2) become too distant to be drawn normally by entity positioning, they are either culled or displayed as imposters on an as-needed basis by the actively rendering camera(s).

Since these (very distant) chunks would rarely require physics, they would need no additional processing. However, if the physics system needs a hook to handle limited physics processing for anything located where the imposters exist (say, a LOD mesh handling something like Star Fragments in BotW), you could simply position the physics world/mesh in that place for whatever limited physics processing you need.

We would need to be aware of animated imposters too. These could be handled slightly differently – e.g. perhaps as stereoscopic RenderTextures of some kind, lerped between? Either way, it might be a good idea to look into Xenoblade Chronicles 2 to see how they do it. They render animated, distant, huge meshes all the time – I think it’s about time Unity supported tech like this, considering Shadow of the Colossus had it way back on the PS2…

The GlobalPoint struct could be realized in 3 forms, switched via preprocessor directives:

  • Full
struct GlobalPoint
{
    public int3 chunkpos;
    public float3 localpos;

    //Accessors ...
}
  • Short
struct GlobalPoint
{
    private int _chunkpos; // packed: 12-bit x, 8-bit y, 12-bit z
    public int3 chunkpos => //...
    public float3 localpos;

    //Accessors ...
}
  • Empty
struct GlobalPoint
{
    public int3 chunkpos => new int3();
    public float3 localpos;

    //Accessors ...
}

In Empty mode, GlobalPoint will be the same as a float3.
Short may add some speed because of its smaller size vs. Full (16 vs 24 bytes), so it could be the default where applicable.
Full is really for space simulators where all 3 dimensions are important and need to be huge.

Hope there are no hidden issues in this idea :slight_smile:

Also, to retrieve a usable float3 position from a GlobalPoint when far away from the virtual origin, we first need to subtract the current world chunk offset from the GlobalPoint and then compute the position, so we need access to the current world chunk offset anywhere in code. Maybe a static field that can be set from the world of the currently running system, or something like the sketch below.
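
A minimal sketch of that, assuming the GlobalPoint layout above and treating chunks as an axis-aligned grid for simplicity (WorldOrigin and its field are hypothetical names):

using Unity.Mathematics;

public static class WorldOrigin {
    // Chunk currently sitting at the floating origin; set by the shift system.
    public static int3 CurrentChunk;
    public const float ChunkSize = 1000f;

    // Origin-relative position that is safe to use as a float3: the chunk
    // delta is small near the player, so float precision stays high.
    public static float3 ToLocal(in GlobalPoint p) {
        int3 chunkDelta = p.chunkpos - CurrentChunk;
        return (float3)chunkDelta * ChunkSize + p.localpos;
    }
}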

Maybe all we need is a separate struct for GlobalPosition and a community that uses it everywhere an absolute position is stored. Everything else can be created separately and will work with origin shifting: we can build square chunks on top of it, hexagonal chunks, maybe triangle chunks… :slight_smile:

I want to see a chunked, infinite MegaCity demo from Unity :slight_smile:


Another note:
Any new system that wants to write the LocalToWorld component must have an additional job that respects the ChunkPoint component on the root object.
Everything will work correctly while entities are in the zero chunk, but additional logic is needed in any other chunk.

So we need an additional rule for the Asset Store automatic review test: shift the world a little and check the shifted positions of the root entities. If something isn’t shifted correctly, fail :slight_smile:

A chunk-based floating-origin solution could also be a good fit for chunked voxel games, even though they usually use chunks 16 metres long :slight_smile:

With such small chunks, we need to be able to shift the world via any custom logic, not only on chunk-boundary crossings :slight_smile:

Why cumbersome hacks that lead to headaches with object state and cache invalidation, instead of a much better and more appropriate tool for the job: double precision? The format works well in industry today (see Unigine: one, two), and games like Star Citizen are utilizing 64-bit floats successfully at scale. Rendering APIs support it out of the box, and Godot is moving towards it through its Vulkan implementation. Physics engines like Bullet support it just fine, and Unity has more control over this area today than before. The math library can be extended for this as well; SLEEF supports double precision.


I’ve tried getting Unity to consider this, but they and the community pushed back hard: https://discussions.unity.com/t/615126


There are a lot of misconceptions about performance and hardware constraints in that thread, most likely due to a lack of practical experience.

That’s interesting. This is an excerpt from the Godot engine.

I wonder if @ might be able to implement something akin to this – it’s actually quite clever.

I am working on a floating origin system, and I’m exploring making certain parts of it double precision. Nothing speaks against double precision floats in ComponentData. I use that for Keplerian Orbits etc.

But as awesomedata’s quote correctly points out, you will STILL need to build your local transforms for PresentationSystems from them in a System anyway. Your GPU doesn’t speak double for vertex coordinates, and even less so for fragments.
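
A minimal sketch of that step, with a hypothetical double-precision position component: all double math happens on the CPU, and only the small camera-relative difference is narrowed to float for the GPU:

using Unity.Entities;
using Unity.Mathematics;

// Hypothetical component: absolute position stored in doubles.
public struct WorldPositionD : IComponentData {
    public double3 Value;
}

public static class Presentation {
    // Subtract in doubles, then narrow: near the camera the difference is
    // small, so the cast to float3 loses essentially nothing.
    public static float3 ToCameraRelative(double3 position, double3 cameraPosition) {
        return (float3)(position - cameraPosition);
    }
}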

And Star Citizen does exactly that as well; I am dead certain about this. They don’t have magical vertex shaders, and they don’t upgrade the rasterizer units in your GPU. Their code works with world-space-to-view/tangent/screen-space transformations that are nowhere near double precision. Even though the performance of SC might make you believe otherwise… :slight_smile:

Now, physics systems are a bit harder, but at long distances physics objects don’t strongly interact (collide/transfer momentum) with each other at all. So, thanks to Unity Physics being stateless, it is practically free and seamless to transfer entities between physics points of reference, too.
