Looks like 2019.3 has been released…maybe we’ll get the new FPS drop soon too?
This doesn't seem to be possible now: GhostPrefabAuthoringComponent is gone in 0.0.4, and when adding ConvertToClientServerEntity to the ghost prefab set to Client only, the ghost is still just an entity without a GameObject presentation.
The GhostPrefabAuthoringComponent is indeed gone; you now have to use the GhostCollectionAuthoringComponent for this instead, and the same GhostCollectionAuthoringComponent is used to generate the code for the collection rather than a menu item. Using it is mandatory - the support for instantiating ghosts with an archetype has been removed.
So there's no way to instantiate a GameObject instead of / along with an archetype for a ghost now?
Not directly, but you can instantiate a GameObject by creating a partial class of the SpawnSystem which implements UpdateNewInterpolatedEntities / UpdateNewPredictedEntities and creates GameObjects which you can link to the entities.
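For example, a minimal skeleton of that approach (CubeGhostSpawnSystem is just a placeholder here - use the partial spawn system class generated for your own ghost - and the prefab field is something you have to assign yourself, e.g. from bootstrap code):

using Unity.Collections;
using Unity.Entities;
using Unity.Jobs;
using UnityEngine;

// Placeholder name: extend the partial spawn system generated for your ghost.
public partial class CubeGhostSpawnSystem
{
    // Presentation prefab, assigned however fits your project (bootstrap code, settings object, etc.).
    public GameObject prefab;

    protected override JobHandle UpdateNewPredictedEntities(NativeArray<Entity> entities, JobHandle inputDeps)
    {
        // Instantiate and link a GameObject per newly spawned predicted ghost here.
        return inputDeps;
    }
}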
Thanks for the reply. I've tried your solution, but I've run into a problem when linking an object to the entity:
struct LinkedGameObject : ISharedComponentData, System.IEquatable<LinkedGameObject>
{
    public GameObject instantiatedObject;
    public int id;

    public bool Equals(LinkedGameObject other)
    {
        return id == other.id;
    }

    public override int GetHashCode()
    {
        return id;
    }
}
protected override JobHandle UpdateNewPredictedEntities(NativeArray<Entity> entities, JobHandle inputDeps)
{
    for (int i = 0; i < entities.Length; i++)
    {
        var entity = entities[i];
        var position = EntityManager.GetComponentData<Translation>(entity);
        var rotation = EntityManager.GetComponentData<Rotation>(entity);
        var obj = Object.Instantiate(prefab, position.Value, rotation.Value);
        var linkedObject = new LinkedGameObject
        {
            instantiatedObject = obj,
            id = entity.GetHashCode()
        };
        EntityManager.AddSharedComponentData(entity, linkedObject);
    }
    return inputDeps;
}
This results in an error. Building the LinkedGameObject array first and then passing it to a job instead results in a "managed data is not allowed in jobs" error.
UPDATE:
Calling
inputDeps.Complete();
at the beginning of UpdateNewPredictedEntities seems to be doing the trick, but I’m not sure if this is the right approach.
If you want to set up the link in that direction you need to wait for the job performing the delayed spawn. It is passed in as an input dependency, so you would have to call "inputDeps.Complete();" before the loop. Doing that introduces a sync point, which you can probably work around in this case - but you will need sync points somewhere to work with GameObjects.
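A minimal sketch of that fix, reusing the prefab field and the LinkedGameObject struct from the post above:

protected override JobHandle UpdateNewPredictedEntities(NativeArray<Entity> entities, JobHandle inputDeps)
{
    // Wait for the delayed-spawn job before reading the entities on the main thread.
    // This is the sync point mentioned above.
    inputDeps.Complete();
    for (int i = 0; i < entities.Length; i++)
    {
        var entity = entities[i];
        var position = EntityManager.GetComponentData<Translation>(entity);
        var rotation = EntityManager.GetComponentData<Rotation>(entity);
        var obj = Object.Instantiate(prefab, position.Value, rotation.Value);
        EntityManager.AddSharedComponentData(entity, new LinkedGameObject
        {
            instantiatedObject = obj,
            id = entity.GetHashCode()
        });
    }
    return inputDeps;
}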
Yeah, that’s what I did and it worked. Thanks.
Also, as a side note: I feel like this NetCode package is more opinionated about how game state management and rendering should be done than it needs to be (Hybrid Renderer, really?). It would be nice to have a more straightforward way to sync the net state with GameObjects, even with an implied performance cost (a lot of people will be doing this anyway). Right now all of this has to be done manually.
Any news on that topic? I'm looking for a networking solution for my early-access FPS game - already released, but single-player only for now.
Two main requirements:
- low latency (highly competitive gameplay)
- not too difficult to implement - a single person should be able to do it in at most 6 months
What I found so far:
Photon (Bolt) requires additional services like GameLift to run on dedicated servers.
SpatialOS looks good, but it's pricey, the latency is mediocre, and player-hosted servers are impossible. At least the first server (normally ~$180/month) is free.
DOTS Multiplayer + PlayFab Multiplayer Servers - I'll probably go this path as I'm already using PlayFab for LiveOps. Probably the most difficult route, but in the end (in 1-2 years) it will be better than the others.
I've also seen that Unity acquired ChilliConnect. They have a few advantages over PlayFab (better leaderboards, C# cloud scripting) but lack multiplayer servers. It would be great if in the future DOTS Multiplayer + ChilliConnect + Multiplay could work together.
Second question: how does Multiplay operate compared to PlayFab Servers (docs)? Which service is easier to implement?
TLDR
What is (or will be within a few months) the easiest way to run DOTS Multiplayer servers, including orchestration and matchmaking?
DOTS Multiplayer + ChilliConnect + Multiplay (Hosting & Matchmaking) is the combination we are focused on making sure works exceptionally well together and is easy to set up, and our samples are configured with it by default.
Here are several presentations about hosting & matchmaking from Unite:
We are proving all of it out ourselves using the DOTS shooter to make sure it works well for both development & deployment. Internally we are already using Multiplay in play sessions for the DOTS shooter, but the self-serve functionality of Multiplay is still being worked on before it can be released.
Good to know. I'll start building the multiplayer and reconsider switching to Chilli once it's ready. I'll have a few options:
- Playfab + Playfab Servers (Azure)
- Playfab + Multiplay (GCloud or AWS)
- ChilliConnect + Multiplay (GCloud or AWS)
That’s the DOTSSample, right?
I’m not sure if this has been discussed before, but I’m curious what techniques you’re going to utilize to save bandwidth.
I'm planning to use DOTS Multiplayer for large-scale battles with several hundred players per server. To achieve this there must be some optimization going on, such as:
Only send data that has changed since the last tick, e.g. when a player presses a button, send that info to the server once, and once more when he releases it. Only state changes should be synced instead of a constant stream sending the same data over and over again. This way you can get rid of a lot of redundant data.
Distance-related tick rate and data compression. Players that are far away from you don't necessarily need to be updated as often or as accurately as those who are close to you. So it's basically a LOD system to save bandwidth.
I haven’t really touched the NetCode package yet, so I’m asking if those systems can be easily implemented, or are ideally on the ToDo list.
All the snapshots sent from the server are prioritized and sent with a cap to the bandwidth used. We prioritize and send the most important state until we reach the cap, the age is taken into account so anything not sent will have higher priority and be more likely to be sent next frame.
We also have delta compression in place which compresses well enough that we can fit ~80 players per packet in DotsSample.
The commands/inputs from the client to the server need to be sent repeatedly; only sending changes would require reliable messages, which would significantly increase the input latency. We also need to send acks of received snapshots frequently for delta compression of snapshots to work well - and commands compress really well, so it only has a marginal effect on bandwidth.
For snapshots we are planning to skip sending ghosts which did not change and/or are not visible. It will save some bandwidth, but our delta compression is really good at handling static objects so I don’t expect it to make a huge difference for that.
It is already possible to use distance as a factor for prioritization so things far away are sent less frequently - see https://docs.unity3d.com/Packages/com.unity.netcode@0.0/manual/ghost-snapshots.html#distance-based-importance
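Very roughly, the prioritization described above works like this (illustrative pseudocode only - the names and scoring formula here are made up, the real logic lives in NetCode's snapshot send system): priority grows with the time since a ghost was last sent, can be scaled down with distance, and the packet is filled in priority order until the bandwidth cap is hit.

using System;
using System.Collections.Generic;

struct GhostPriority
{
    public int ghostId;
    public int ticksSinceLastSent;   // age: anything not sent keeps accumulating priority
    public float distanceToViewer;   // optional distance-based importance scaling
    public float baseImportance;

    public float Score()
    {
        return baseImportance * (1 + ticksSinceLastSent) / Math.Max(1f, distanceToViewer);
    }
}

static class SnapshotBudget
{
    // Fill the snapshot in priority order until the per-packet byte budget is used up.
    public static List<int> SelectGhosts(List<GhostPriority> ghosts, int byteBudget, int bytesPerGhost)
    {
        ghosts.Sort((a, b) => b.Score().CompareTo(a.Score()));
        var selected = new List<int>();
        int used = 0;
        foreach (var ghost in ghosts)
        {
            if (used + bytesPerGhost > byteBudget)
                break;
            selected.Add(ghost.ghostId);
            used += bytesPerGhost;
        }
        return selected;
    }
}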
I've noticed, when playing with the multiplayer cube example and my own implementation, that there are some simulation inaccuracies between the client and server due to the server not receiving client input packets in time. This happens when running the example with even a moderate latency (60 ms+). Does anyone else get this issue, or am I doing something wrong? I would think the example should work accurately out of the box. Are there still improvements to be made to command buffering/transmission to address this? At the moment you can see the client's predicted cube jittering back and forth when changing input direction, even at low simulated latency. At higher latency/packet loss I could understand this, but at this level it seems like an unacceptable amount of misprediction.
I've also noticed this behaviour and wondered why nobody else had mentioned it.
Logging the tick on the server when the CommandData is received shows that the CommandData sometimes arrives too late.
Here is some log at 100ms recv/send delay.
The “Applied tick” is the tick which was last received and should match to the server tick for appropriate simulation.
The NetDbg tool shipped with Unity shows the CommandAge (orange line). The CommandAge appears to go up and down like a waveform. This can be explained by the continuous adjustment of the estimated server tick on the client.
White line = No delay of command data
Orange line below white line = CommandData received earlier than needed
Orange line above white line = CommandData received too late
At 0 ms send/recv delay, CommandData is always on time:
At 100 ms send/recv delay, CommandData sometimes arrives too late:
At 300 ms send/recv delay, it gets worse:
Which CommandData is applied on which server tick is calculated on the client. I haven't looked closely at the calculation, but it seems wrong for larger latencies.
Any ETA on this feature?
Actually, this could save bandwidth in cases where something changes extremely rarely. Take a door - it won't get its open/close state switched multiple times per second. Why would we need delta compression for something that doesn't really change? Does this also mean it will become possible to despawn/spawn ghosts based on distance? It would be awesome to have LOD not just for ghost updates, but also for their presence.
Hm, this sounds like the bug I mentioned: https://forum.unity.com/threads/dots-multiplayer-synchronization-lags.757415/#post-5048408
That was with the old multiplayer asteroids sample; I haven't seen it in the new one, but I haven't done any accurate testing. It could still be the same issue, just moved to a different place.
Sadly, I don’t have the time right now to investigate.
@Flipps Debug.Log calls are really bad for network testing - the delay introduced by logging distorts every measurement.
Write the log data you need into a list instead and save it to disk when quitting.
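Something along these lines works (TickLogger, the field names, and the CSV output are just an example of the pattern, not anything from the package):

using System.Collections.Generic;
using System.IO;
using UnityEngine;

public class TickLogger : MonoBehaviour
{
    struct LogEntry
    {
        public uint serverTick;
        public uint appliedCommandTick;
    }

    readonly List<LogEntry> entries = new List<LogEntry>();

    public void Record(uint serverTick, uint appliedCommandTick)
    {
        // Buffer in memory; no I/O or string formatting during the measurement.
        entries.Add(new LogEntry { serverTick = serverTick, appliedCommandTick = appliedCommandTick });
    }

    void OnApplicationQuit()
    {
        // Write everything out once, after the test run is over.
        using (var writer = new StreamWriter(Path.Combine(Application.persistentDataPath, "tick_log.csv")))
        {
            writer.WriteLine("serverTick,appliedCommandTick");
            foreach (var e in entries)
                writer.WriteLine($"{e.serverTick},{e.appliedCommandTick}");
        }
    }
}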
@Enzi I will do that, but I think the results won't be different, because the CommandAge graph in the NetDbg shows the same thing. Or maybe I am completely wrong.
The sluggish movement is really easy to reproduce, either with the NetCube sample or the tutorial on the NetCode manual page (with a send/recv delay of 200+ ms).
The problem is that the calculation is based on the command age reported from the server to the client. When your ping is high it takes too long to get this updated result, and the client ends up overcompensating since the calculation is based on data that is several hundred milliseconds old.
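Conceptually, the feedback loop looks something like the sketch below (purely illustrative, with made-up names and an arbitrary gain - not NetCode's actual code). With a high RTT the reported command age reflects a target tick the client chose several hundred milliseconds earlier, so the correction acts on stale data and oscillates:

using UnityEngine;

// Purely conceptual: the client keeps an estimate of how far ahead of the server it
// must target its commands so they arrive on time, nudged by the command age the
// server reports back. With high latency that report is stale, so the loop overshoots.
public static class CommandTickEstimator
{
    static float tickOffset; // extra ticks ahead of the estimated server tick

    public static void OnCommandAgeReported(float reportedCommandAge)
    {
        // Positive age = commands arrived late, aim further ahead; negative = aim less far ahead.
        tickOffset += reportedCommandAge * 0.1f; // gain chosen arbitrarily for illustration
    }

    public static uint TargetTick(uint estimatedServerTick)
    {
        return estimatedServerTick + (uint)Mathf.Max(0f, Mathf.Round(tickOffset));
    }
}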
Thanks for investigating this, the graphs really helped. I’ve filed a bug and we will improve this going forward.