Cheers for conducting a test with measurable outcomes! I'm eager to hear more about your findings; there's no better validation than seeing how things perform under practical conditions.
Yes, and my reply was in regard to you saying this:
Which is one of the main concerns for rendering, and your own reference (Unreal) even recommends against it.
Yes, which is one of the main concerns about "bandwidth performance disadvantage".
You keep saying this, but the logic is a bit absurd. Taken to either extreme (0 bits or infinite bits) it doesn't make sense, so it's about the right number of bits for the use case.
Can you point me to UE's recommendation against using LWC? I read the page you posted and I don't see anything that says that. They say that translated world space performs better in shaders than absolute world space with LWC, which is obvious, but nothing else. That's not a recommendation against anything. They give you two options, describe them, and let you choose which one fits your game better.
That's exactly what I would like to have in Unity: the possibility of using large world coordinates if required, with all the consequences. It is evident that there will be side effects when it is used, but there are applications for which Unity cannot be chosen because it lacks the option.
Huh? I was originally responding to the claim that the "bandwidth performance disadvantage" wasn't a concern because of Unreal. Doubles used in subsystems like rendering are considered a "bandwidth performance disadvantage", and my response points out that even Unreal agrees (and Unreal was the original counter-argument).
I don't know how their statement could be any clearer: it is absolutely a recommendation against absolute world space with LWC in shaders.
This discussion is not going anywhere. UE's documentation is warning about the effects of using LWC in shaders. They are not saying "do not use LWC". Moreover, LWC is the default in UE5, but you can change to float coordinates if you don't want to use it. I'm not asking Unity to replace floats with doubles and obligate everybody to use them. I'm asking Unity to offer both options and let people choose what's better for their games. Just like UE, Flax, Godot, Unigine, …
And if it is slower, it is slower. I'm big enough to accept the consequences…
I agree. You're the one misunderstanding my previous posts. I was simply pointing out the incorrect assumption that Unreal has 64-bit rendering enabled, even in conjunction with LWC.
I'm looking forward to hearing about the results of your testing! If you're open to it, please consider sharing your insights either here or in a dedicated thread; they could be invaluable for others exploring similar topics.
Hello trueh, that is a good question (bringing a missile into consideration).
The ship, if it is at the origin under true floating-origin motion, will never show visible jitter. Although the missile will travel into the distance and begin to jitter, that jitter will not be visible from the ship: thanks to perspective foreshortening, it remains sub-pixel from the ship's point of view.
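To put a rough number on the foreshortening argument, here is my own back-of-the-envelope sketch (not from the original post; the 60° field of view, 1080-pixel screen height, and one-ULP jitter amplitude are all assumptions). Because the float32 quantisation step grows in proportion to distance, the angular size of the jitter is roughly constant and far below one pixel:

```python
import math

fov_v = math.radians(60.0)       # assumed vertical field of view
screen_px = 1080                 # assumed vertical resolution
pixel_angle = fov_v / screen_px  # angle subtended by one pixel (~9.7e-4 rad)

def ulp32(x: float) -> float:
    # Spacing of adjacent 32-bit floats near magnitude x (24-bit significand).
    return 2.0 ** (math.floor(math.log2(x)) - 23)

for d in (1e3, 1e5, 1e7):
    jitter = ulp32(d)    # assume worst-case jitter of one quantisation step
    angle = jitter / d   # small-angle approximation of its apparent size
    print(f"d={d:>10.0f} m  jitter={jitter:.3e} m  angle={angle:.3e} rad  "
          f"sub-pixel={angle < pixel_angle}")
```

Under these assumptions the ratio `jitter / d` hovers around 2^-23 regardless of distance, roughly four orders of magnitude below the pixel angle, which is the foreshortening point made above.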
With regard to the missile, note the reference to Hawking's statement on "all freely moving observers" in my article, https://www.researchgate.net/publication/382361412_Relative_information_in_motion. My answer is therefore to make the missile a freely moving, origin-centric observer. You can have multiple floating origins.
However, there are two important aspects to this:
1. One is the mathematical calculation of the motion of the missile, which can be done as origin-centric relative motion without hindrance.
2. The other is the reverse motion of the relevant World information (whether it is rendered or not) for both observers.
I can say, with confidence, that the math and implementation of multiple simultaneous floating origins is essentially proven, because multiplayer with CFO/DRS is proven: each moving observer is stationary at the origin and each has a copy of the "same" World that correctly moves around them, and the shared views of the two observers looking at each other maintain correct relative positional correspondence.
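A minimal numeric sketch of that idea (my own illustration with made-up coordinates, not code from CFO/DRS; `f32` simulates storing a value as a 32-bit float): keep the shared World positions in doubles, pin each observer at its own origin, and derive each observer's render-space positions by subtracting that observer's World position (the "reverse motion") before casting to float.

```python
import struct

def f32(x: float) -> float:
    # Round-trip through a 32-bit float to simulate single-precision storage.
    return struct.unpack('f', struct.pack('f', x))[0]

# Made-up World coordinates, far from World zero, kept in doubles.
ship_world    = 12_345_678.25
missile_world = 12_349_000.75

def render_pos(world_pos: float, observer_world: float) -> float:
    # Reverse motion: the World moves around the observer, who stays at 0.
    return f32(world_pos - observer_world)

# Each observer sits at its own origin; the other craft appears at the small
# relative offset, so both views stay in exact positional correspondence.
print(render_pos(missile_world, ship_world))   # 3322.5  (missile seen from ship)
print(render_pos(ship_world, missile_world))   # -3322.5 (ship seen from missile)
```

The point of the sketch: the subtraction happens in double precision, so the small relative offset survives intact even though neither absolute coordinate fits accurately in a float.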
However, the above is more a theoretical statement about the particular use case you describe. I have not implemented that use case yet because, although I could make the missile a player triggered by the ship player, that is not really the solution you are after.
So the main stumbling block to doing this in single player is that one needs to separate World geometry from the rendering process. In Unity (BIRP), the geometry in the active visible scene hierarchy is tied to being rendered: the rendering components are combined with the objects' geometry components and information.
To handle your scenario properly, one would need to manipulate and transform duplicate parts of the World scene hierarchy and object information independently of the visual and rendering components, and then render whichever observer's view you want at any time (or both camera views if needed).
BTW, I noticed your question on the forum, but I was not sure I could give you a proper answer. I tend to wait until I have implemented a scenario before offering solutions, and I do not yet have an answer for the necessary separation of data and renderables. Perhaps ECS will provide this?
Addendum: given the title of this discussion, a compromise would be to use double precision for the missile path calculations (and for your own physics, if you are doing that), casting each new position to floats each cycle. This will not avoid the underlying jitter from the lower resolution far from the player, but it may reduce accumulated calculation error enough that the missile ends up in a more accurate position.
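A small sketch of that compromise (my own illustration; the speed, timestep, and starting offset are made-up numbers, and `f32` simulates 32-bit storage via a struct round-trip): integrate the same straight-line path once entirely in 32-bit floats and once in doubles with a per-cycle cast for rendering, then compare the end-position error.

```python
import struct

def f32(x: float) -> float:
    # Round-trip through a 32-bit float to simulate single-precision storage.
    return struct.unpack('f', struct.pack('f', x))[0]

start = 1_000_000.0   # metres from origin (made-up large offset)
speed = 333.0         # m/s (made up)
dt = 1.0 / 60.0       # 60 Hz cycle
steps = 3600          # one minute of flight

pos32 = f32(start)    # path integrated entirely in single precision
pos64 = start         # path integrated in double precision
for _ in range(steps):
    pos32 = f32(pos32 + f32(speed * dt))
    pos64 = pos64 + speed * dt
    render_pos = f32(pos64)   # cast to float each cycle for rendering only

exact = start + speed * dt * steps
print("float32 end error:", abs(pos32 - exact))   # metres of drift
print("double  end error:", abs(pos64 - exact))   # tiny by comparison
```

Under these assumptions the double-precision path lands essentially on the exact endpoint, while the all-float path drifts by metres, because a million metres from the origin each float addition is quantised to a step of 0.0625 m. The per-cycle cast still leaves the rendered position as accurate as a float can express at that distance, which is the compromise described above.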