Is NetworkCompressionModel implemented?

In the DataStream class I see various Read/Write Packed functions that take a NetworkCompressionModel as a parameter, as well as some tests using the compression, but it looks like this is not actually implemented? There also seems to be no documentation on the topic of compression.
Is it working, and if not, is it intended to be implemented?
Currently I'm using Transport 1.3

Yes, it is implemented. You simply need to create an instance of NetworkCompressionModel and pass it to the relevant functions in the API. For example:

// The allocator argument is required but unused (see the notes below).
var model = new NetworkCompressionModel(Allocator.Temp);

// Acquire a writer for the connection, write a packed (variable-length) int, then send.
driver.BeginSend(connection, out var writer);
writer.WritePackedInt(42, model);
driver.EndSend(writer);
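On the receiving side you'd use the matching ReadPacked functions with the same model. Roughly something like this, assuming the usual event-polling loop (the exact surrounding code depends on your setup):

NetworkEvent.Type eventType;
while ((eventType = connection.PopEvent(driver, out var reader)) != NetworkEvent.Type.Empty)
{
    if (eventType == NetworkEvent.Type.Data)
    {
        // Reads the value written with WritePackedInt above.
        var value = reader.ReadPackedInt(model);
    }
}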

A few notes about the compression model, however:

  • You need to pass in an allocator when creating the model, but it's not actually used for anything (it's a remnant of when the API required one). It's fine to use Allocator.Temp even if you intend to use the same model over multiple frames.
  • Consequently, even though it implements IDisposable, there is no need to dispose of the compression model (the Dispose method does nothing).
  • Initializing the compression model is relatively costly. I suggest creating it once and reusing it (see the sketch after this list).
  • It's not currently possible to create a compression model with custom values (e.g. custom bucket sizes and offsets). But unless you have very specific needs, the default values should be more than sufficient.
  • If you eventually update to Transport 2.0, note that NetworkCompressionModel has been renamed to StreamCompressionModel and is now provided by the Collections package. Many of the issues noted here also no longer apply: it's not disposable anymore and it's statically initialized (see the second sketch below).
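To illustrate the "create once, reuse" advice, here's a minimal sketch; the class and method names are just for illustration, and error handling is omitted:

using Unity.Collections;
using Unity.Networking.Transport;
using UnityEngine;

public class CompressionExample : MonoBehaviour
{
    NetworkDriver m_Driver;
    NetworkCompressionModel m_CompressionModel;

    void Start()
    {
        m_Driver = NetworkDriver.Create();
        // Created once and reused; the allocator is unused, so Temp is fine
        // even though the model lives for the lifetime of this component.
        m_CompressionModel = new NetworkCompressionModel(Allocator.Temp);
    }

    void SendValue(NetworkConnection connection, int value)
    {
        if (m_Driver.BeginSend(connection, out var writer) == 0)
        {
            writer.WritePackedInt(value, m_CompressionModel);
            m_Driver.EndSend(writer);
        }
    }

    void OnDestroy()
    {
        m_Driver.Dispose();
        // No need to dispose the compression model (Dispose is a no-op).
    }
}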
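Regarding the last note: in Transport 2.0 (with the Collections package providing the model), the same send example would look roughly like this; the Default property gives you a shared, statically-initialized instance, so there's nothing to create or dispose:

using Unity.Collections;

// Shared default model; no allocation, no Dispose needed.
var model = StreamCompressionModel.Default;

driver.BeginSend(connection, out var writer);
writer.WritePackedInt(42, model);
driver.EndSend(writer);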