About the com.unity.serialization Package

First of all: incredible package, I love it, though I have not seen it advertised anywhere and found it accidentally. Also, I don't see a section for it in the package forums, so I posted here.

I don't know if this package is going to remain public, or if I should forget about it because it will become internal, but it's pure gold for me.

About implementing adapters: I believe it could be beneficial to complement IAdapter and IContravariantAdapter (either Json or Binary) with a generic IGenericAdapter along these lines:

    public interface IGenericBinaryAdapter
    {
        void Serialize<TValue>(in BinarySerializationContext<TValue> context, TValue value);
        TValue Deserialize<TValue>(in BinaryDeserializationContext<TValue> context);
    }

I have a use case where I want any class or struct that implements ISerializationCallbackReceiver to get OnAfterDeserialize() and OnBeforeSerialize() called.
With the current implementation, I cannot target structs “generically” through an adapter, as contravariance doesn't work on value types, so I would have to create an adapter specifically for each struct that implements the interface.
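To illustrate the limitation: generic variance in C# only applies to reference-type arguments, so a contravariant adapter interface can never be reused for structs. A minimal stand-alone example (IHandler, ObjectHandler and MyStruct are made-up names, not package types):

```csharp
using System;

interface IHandler<in T> { void Handle(T value); }

class ObjectHandler : IHandler<object>
{
    public void Handle(object value) => Console.WriteLine(value);
}

struct MyStruct { }

static class VarianceDemo
{
    static void Main()
    {
        IHandler<object> handler = new ObjectHandler();

        // Works: string is a reference type, so an IHandler<object>
        // is usable as an IHandler<string> through contravariance.
        IHandler<string> stringHandler = handler;

        // Does NOT compile: variance never applies to value types (CS0266).
        // IHandler<MyStruct> structHandler = handler;
    }
}
```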

With a generic adapter I can target any class or struct. Having the generic type in the adapter's methods can benefit other use cases too, I believe, especially for targeting structs that implement a given interface.

Also, I'd like you to consider implementing a similar adapter that also passes down an object representing the property we are serializing/deserializing (like the Property&lt;TContainer,TValue&gt; from Unity.Properties), as this would allow us to look at the attributes of the property, so we could have System.Attributes that affect serialization.
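A rough sketch of what such an adapter could look like. Nothing below exists in the package today; the interface name and member shapes are my own invention for illustration:

```csharp
// Hypothetical API sketch: a binary adapter that also receives the property
// being visited, so its System.Attributes can influence serialization.
// None of these members exist in com.unity.serialization today.
public interface IPropertyAwareBinaryAdapter
{
    void Serialize<TContainer, TValue>(
        in BinarySerializationContext<TValue> context,
        Property<TContainer, TValue> property, // could query the property's attributes
        TValue value);

    TValue Deserialize<TContainer, TValue>(
        in BinaryDeserializationContext<TValue> context,
        Property<TContainer, TValue> property);
}
```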

Still, I love this package, really big thanks :slight_smile:

So, replying to myself xD: for the ISerializationCallbackReceiver problem, this could be achieved by just running a PropertyVisitor before and after the serialization. It adds the overhead of two full visitations of the object graph, but it is doable; it just seems like a waste considering that we are already visiting the graph.
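The workaround could be sketched roughly like this. It assumes the PropertyVisitor base class from Unity.Properties; the exact override signature may differ between package versions, so treat this as an outline rather than working code:

```csharp
// Rough sketch, assuming Unity.Properties' PropertyVisitor. Run one instance
// with BeforeSerialize = true before serializing, and one with false after
// deserializing, to emulate ISerializationCallbackReceiver support.
class SerializationCallbackVisitor : PropertyVisitor
{
    public bool BeforeSerialize;

    protected override void VisitProperty<TContainer, TValue>(
        Property<TContainer, TValue> property, ref TContainer container, ref TValue value)
    {
        if (value is ISerializationCallbackReceiver receiver)
        {
            if (BeforeSerialize) receiver.OnBeforeSerialize();
            else receiver.OnAfterDeserialize();

            // For structs, `receiver` is a boxed copy, so write any mutations back.
            value = (TValue)receiver;
        }

        base.VisitProperty(property, ref container, ref value);
    }
}
```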

I agree, I love it too - but only binary! :slight_smile:
If you dabble with the Json part … I bet you’ll be like: :face_with_spiral_eyes::sweat_smile::eyes::roll_eyes::hushed::rage::rage::rage::rage::rage:

I’ll come back to this post tomorrow. Want to double-check whether I’ve done something like this before. Can’t remember.

Okay, so I don’t know how this fits into the question; I’ll just post what I’m doing in that project where I binary-serialize classes with native collections in them.

First, this is the To/FromBinary starting point with a byte[] as input/output that works with a generic type:
static class Serialize

public static class Serialize
{
    /// <summary>
    /// Serializes the object to binary using the provided adapters.
    /// Adapters provide control over how serialization is processed.
    /// </summary>
    /// <param name="obj"></param>
    /// <param name="adapters"></param>
    /// <typeparam name="T"></typeparam>
    /// <returns></returns>
    public static unsafe Byte[] ToBinary<T>(T obj, IReadOnlyList<IBinaryAdapter> adapters = null)
    {
        var buffer = new UnsafeAppendBuffer(16, 8, Allocator.Temp);
        var parameters = new BinarySerializationParameters { UserDefinedAdapters = adapters?.ToList() };
        BinarySerialization.ToBinary(&buffer, obj, parameters);

        var bytes = buffer.ToBytesNBC();
        buffer.Dispose();

        return bytes;
    }

    /// <summary>
    /// Attempts to deserialize a byte[] to the specified type using the provided adapters.
    /// </summary>
    /// <param name="serializedBytes"></param>
    /// <param name="adapters"></param>
    /// <typeparam name="T"></typeparam>
    /// <returns></returns>
    public static unsafe T FromBinary<T>(Byte[] serializedBytes, IReadOnlyList<IBinaryAdapter> adapters = null)
    {
        fixed (Byte* ptr = serializedBytes)
        {
            var bufferReader = new UnsafeAppendBuffer.Reader(ptr, serializedBytes.Length);
            var parameters = new BinarySerializationParameters { UserDefinedAdapters = adapters?.ToList() };
            return BinarySerialization.FromBinary<T>(&bufferReader, parameters);
        }
    }
}

I have an interface for the classes/structs that I wish to serialize from within, rather than putting everything in adapter classes:
IBinarySerializable interface

    public interface IBinarySerializable
    {
        unsafe void Serialize(UnsafeAppendBuffer* writer);
        unsafe void Deserialize(UnsafeAppendBuffer.Reader* reader, Byte serializedDataVersion);
    }

This is a data container class which accepts generic structs (unmanaged) that implement the IBinarySerializable interface:
Generic Data Container

public struct LinearDataMapChunk<TData> : IEquatable<LinearDataMapChunk<TData>>, IDisposable
    where TData : unmanaged, IBinarySerializable
{
    private const Byte ChunkAdapterVersion = 0;

    private ChunkSize m_Size;
    private UnsafeList<TData> m_Data;
 
    public static List<IBinaryAdapter> GetBinaryAdapters(Byte dataAdapterVersion) => new()
    {
        new LinearDataMapChunkBinaryAdapter<TData>(ChunkAdapterVersion, dataAdapterVersion, Allocator.Domain),
    };
 
    public LinearDataMapChunk(ChunkSize size, UnsafeList<TData> data)
    {
        if (data.IsCreated == false)
            throw new ArgumentException("UnsafeList<TData> passed into ctor is not allocated");

        m_Size = math.max(ChunkSize.zero, size);
        m_Data = data;
        ResizeListToIncludeHeightLayer(m_Size.y);
    }
 
    // ... rest omitted
}

Then I have a base class for my adapters just so that every adapter carries a version. This is crucial if you want to be able to support older versions of already-serialized data (e.g. a user’s savegame, or user content!) as you update your serialized classes with more fields, different types, and so on.
Versioned Binary Adapter (abstract base)

public abstract class VersionedBinaryAdapterBase
{
    /// This represents the adapter's "latest" version.
    public Byte AdapterVersion { get; set; }

    public VersionedBinaryAdapterBase(Byte adapterVersion) => AdapterVersion = adapterVersion;

    ///     Write the current Version / read the serialized version
    protected unsafe void WriteAdapterVersion(UnsafeAppendBuffer* writer) => writer->Add(AdapterVersion);
    protected unsafe Byte ReadAdapterVersion(UnsafeAppendBuffer.Reader* reader) => reader->ReadNext<Byte>();
    protected String GetVersionExceptionMessage(Byte version) => $"serialized version {version} no longer supported";
}

Next is an actual adapter implementation that calls the interface methods Serialize and Deserialize. This has the advantage that the serialization code now lives within the serialized data, keeping it closely tied to the data and easy to update without sifting through many more lines of adapter code.
Binary Adapter implementation

public class LinearDataMapChunkBinaryAdapter<TData> : VersionedBinaryAdapterBase,
    IBinaryAdapter<LinearDataMapChunk<TData>> where TData : unmanaged, IBinarySerializable
{
    private readonly Byte m_DataVersion;
    private readonly Allocator m_Allocator;

    private static unsafe void WriteChunkData(
        in BinarySerializationContext<LinearDataMapChunk<TData>> context, in UnsafeList<TData>.ReadOnly dataList)
    {
        var writer = context.Writer;
        var dataLength = dataList.Length;
        writer->Add(dataLength);

        foreach (var data in dataList)
            data.Serialize(writer);
    }

    private static unsafe UnsafeList<TData> ReadChunkData(
        in BinaryDeserializationContext<LinearDataMapChunk<TData>> context, Byte serializedDataVersion,
        Allocator allocator)
    {
        var reader = context.Reader;
        var dataLength = reader->ReadNext<Int32>();

        var list = UnsafeListExt.NewWithLength<TData>(dataLength, allocator);
        for (var i = 0; i < dataLength; i++)
        {
            var data = new TData();
            // TODO: avoid boxing!
            data.Deserialize(reader, serializedDataVersion);
            list[i] = data;
        }

        return list;
    }

    public LinearDataMapChunkBinaryAdapter(Byte adapterVersion, Byte dataVersion, Allocator allocator)
        : base(adapterVersion)
    {
        m_DataVersion = dataVersion;
        m_Allocator = allocator;
    }

    public unsafe void Serialize(in BinarySerializationContext<LinearDataMapChunk<TData>> context,
        LinearDataMapChunk<TData> chunk)
    {
        var writer = context.Writer;

        WriteAdapterVersion(writer);
        writer->Add(m_DataVersion);
        writer->Add(chunk.Size);
        WriteChunkData(context, chunk.Data);
    }

    public unsafe LinearDataMapChunk<TData> Deserialize(
        in BinaryDeserializationContext<LinearDataMapChunk<TData>> context)
    {
        var reader = context.Reader;

        var serializedAdapterVersion = ReadAdapterVersion(reader);
        if (serializedAdapterVersion == AdapterVersion)
        {
            var serializedDataVersion = reader->ReadNext<Byte>();
            var chunkSize = reader->ReadNext<ChunkSize>();
            var data = ReadChunkData(context, serializedDataVersion, m_Allocator);

            return new LinearDataMapChunk<TData>(chunkSize, data);
        }

        throw new SerializationVersionException(GetVersionExceptionMessage(serializedAdapterVersion));
    }
}

So the data class and its adapter are written once; the data class itself holds generic data, which just needs to implement the IBinarySerializable interface, and even that is optional.

Finally, here are two instances of a struct that get serialized. I use this in tests to check that “loading an older version of binary data” works as expected.
Old and Current serializable data structs

public struct DataVersionOld : IBinarySerializable
{
    public Byte RemainsUnchanged0;
    public Int16 WillChangeTypeInVersion1;
    public Byte RemainsUnchanged1;
    public Int64 WillBeRemovedInVersion1;
    public Byte RemainsUnchanged2;

    public unsafe void Serialize(UnsafeAppendBuffer* writer)
    {
        writer->Add(RemainsUnchanged0);
        writer->Add(WillChangeTypeInVersion1);
        writer->Add(RemainsUnchanged1);
        writer->Add(WillBeRemovedInVersion1);
        writer->Add(RemainsUnchanged2);
    }

    public unsafe void Deserialize(UnsafeAppendBuffer.Reader* reader, Byte serializedDataVersion) =>
        throw new NotImplementedException();
}

public struct DataVersionCurrent : IBinarySerializable
{
    public const Double NewFieldInitialValue = 1.2345;

    public Byte RemainsUnchanged0;
    public Int64 WillChangeTypeInVersion1;
    public Byte RemainsUnchanged1;
    public Double NewFieldWithNonDefaultValue;
    public Byte RemainsUnchanged2;

    public unsafe void Serialize(UnsafeAppendBuffer* writer)
    {
        writer->Add(RemainsUnchanged0);
        writer->Add(WillChangeTypeInVersion1);
        writer->Add(RemainsUnchanged1);
        writer->Add(NewFieldWithNonDefaultValue);
        writer->Add(RemainsUnchanged2);
    }

    public unsafe void Deserialize(UnsafeAppendBuffer.Reader* reader, Byte serializedDataVersion)
    {
        switch (serializedDataVersion)
        {
            case 1:
                RemainsUnchanged0 = reader->ReadNext<Byte>();
                WillChangeTypeInVersion1 = reader->ReadNext<Int64>();
                RemainsUnchanged1 = reader->ReadNext<Byte>();
                NewFieldWithNonDefaultValue = reader->ReadNext<Double>();
                RemainsUnchanged2 = reader->ReadNext<Byte>();
                break;
            case 0:
                RemainsUnchanged0 = reader->ReadNext<Byte>();
                WillChangeTypeInVersion1 = reader->ReadNext<Int16>();
                RemainsUnchanged1 = reader->ReadNext<Byte>();
                reader->ReadNext<Int64>(); // skip bytes for: WillBeRemovedInVersion1
                RemainsUnchanged2 = reader->ReadNext<Byte>();

                // could also be a value computed from the other fields
                NewFieldWithNonDefaultValue = NewFieldInitialValue;
                break;

            default:
                throw new SerializationVersionException($"unhandled data version {serializedDataVersion}");
        }
    }
}

Note that you needn’t create new structs whenever you change the version; these two exist just for my unit tests. If you do change the version of the binary data, you do so in the existing data class and add another case to the switch statement to handle loading each older version of the serialized data.

There are cases where you may end support for very old versions; you’d then remove the code for, say, versions older than the past four generations of serialized data, and trying to load that data will then throw an exception.

Almost forgot, this is the “can I load serialized binary data of the older version?” unit test:
Unit test: load older binary data version

[Test] public void Deserialize_WhenLoadingPreviousVersion_DataCanBeDeserialized()
{
    var data0 = new DataVersionOld
    {
        RemainsUnchanged0 = 0xff,
        WillChangeTypeInVersion1 = 8,
        RemainsUnchanged1 = 0xff,
        WillBeRemovedInVersion1 = 9,
        RemainsUnchanged2 = 0xff,
    };

    using (var chunk = new LinearDataMapChunk<DataVersionOld>(new ChunkSize(1, 1, 1)))
    {
        chunk.SetData(LocalCoord.zero, data0);

        var adapterVersion0 = new List<IBinaryAdapter> {
            new LinearDataMapChunkBinaryAdapter<DataVersionOld>(TestAdapterVersion, 0, Allocator.Domain),
        };
        var bytes = Serialize.ToBinary(chunk, adapterVersion0);
        Debug.Log($"{bytes.Length} Bytes: {bytes.AsString()}");

        var adapterVersion1 = new List<IBinaryAdapter> {
            new LinearDataMapChunkBinaryAdapter<DataVersionCurrent>(TestAdapterVersion, 1, Allocator.Domain),
        };
      
        using (var chunk1 = Serialize.FromBinary<LinearDataMapChunk<DataVersionCurrent>>(bytes, adapterVersion1))
        {
            var data1 = chunk1.GetWritableData()[0];

            Assert.That(data1.WillChangeTypeInVersion1, Is.EqualTo((Int64)data0.WillChangeTypeInVersion1));
            Assert.That(data1.NewFieldWithNonDefaultValue, Is.EqualTo(DataVersionCurrent.NewFieldInitialValue));
            Assert.That(data1.RemainsUnchanged0, Is.EqualTo(data0.RemainsUnchanged0));
            Assert.That(data1.RemainsUnchanged1, Is.EqualTo(data0.RemainsUnchanged1));
            Assert.That(data1.RemainsUnchanged2, Is.EqualTo(data0.RemainsUnchanged2));

            // see if we can serialize v1 correctly
            var bytes2 = Serialize.ToBinary(chunk1, adapterVersion1);
            Debug.Log($"{bytes2.Length} Bytes: {bytes2.AsString()}");

            using (var chunk2 = Serialize.FromBinary<LinearDataMapChunk<DataVersionCurrent>>(bytes2, adapterVersion1))
                Assert.That(chunk2.GetWritableData()[0], Is.EqualTo(data1));
        }
    }
}

Final note: the unit test relies on the IEquatable<> implementations that I omitted from the two test structs. Just in case someone spots that the unit test isn’t actually comparing the fields: it does, just not in this example. :wink:

Hi @Canijo ,

Although the original implementer has moved on, I will relay your kind words.

I agree this would be needed, but it’s unlikely that this feature will be added in the short term. For the time being, it’s probably fine to make it a local package and make the change directly.

Same answer as the above, sadly. I believe the serialization context already contains that information in some way but doesn’t expose it. It might only be there for Json adapters, though.

Hi @CodeSmile ,

Can you elaborate further on this? I’ve actually heard the opposite feedback many times. :slight_smile:

Thank you!

Well, the issue with the Json serializer is that it seems to require you to know every detail of the Json format. It may be a case of bad documentation too; perhaps I took a wrong turn somewhere.

I ended up writing code that said: put a bracket here, then fill in content, then add another bracket, and of course make that a key/value thing, and so on … yeah, that’s stuff I can pretty much do myself with a StringBuilder, and it’ll be more readable and understandable.

It just felt extremely low-level and confusing, to the point where I could not make the simplest things work. I was fighting heavily with the serializer throwing very confusing exceptions because it was expecting something different from what I gave it, and no matter how I tweaked it, it would still complain. For example, an object that contains another object, or an object that contains an array with data in it. Basic stuff like that I just couldn’t make work after over two days, where the resulting json would have been maybe five lines.

Again, the docs for this are terrible, but in contrast, the binary serialization was absolutely straightforward, almost natural, with even LESS documentation than the Json part.

If there’s some secret docs to this, let me know and I may give it another try.

The documentation of the package is lacking, for sure. In most cases, you shouldn’t have to write the brackets, indents, or whatnot manually. The “primitives” of the JsonWriter follow the Json rules that must be respected; for example, to write a key-value pair, you must be inside an object scope.

Usually, the thing I’ve seen trip up people using it is that they open a scope before nesting into adapters. For example, opening an object scope and then calling context.SerializeValue(...), expecting that the adapter for that value will write it as a key-value pair. This tends to break very easily and, again, the error messages here are lacking.
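To make that concrete, here is a sketch of the mistake using a hypothetical Player adapter like the ones further below:

```csharp
public void Serialize(in JsonSerializationContext<Player> context, Player value)
{
    using var objectScope = context.Writer.WriteObjectScope();

    // Problematic: SerializeValue writes a *value*, and the nested adapter may
    // open its own object scope, so this does not produce the "Position": {...}
    // key-value pair you might expect; validation fails with a confusing error.
    context.SerializeValue(value.Position);

    // Correct: write the key yourself, then hand the value off to its adapter.
    // context.Writer.WriteKey("Position");
    // context.SerializeValue(value.Position);
}
```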

Looking at the type definitions in the Getting Started section of the package documentation, I’ll iteratively implement adapters to make the output smaller. The basic output would be:

{
    "Name": "Bob",
    "Health": 100,
    "Position": {
        "x": 10,
        "y": 20
    },
    "Inventory": [
        {
            "Name": "Sword",
            "Type": 0
        },
        {
            "Name": "Shield",
            "Type": 1
        },
        {
            "Name": "Health Potion",
            "Type": 2
        }
    ]
}

If you wanted to have the same output, but with different/shorter names, you could define a Player adapter like this:

public void Serialize(in JsonSerializationContext<Player> context, Player value)
{
    // Player will have multiple key-value fields, so we must open an object scope.
    using var objectScope = context.Writer.WriteObjectScope();

    // Most primitives can write key-value pairs directly.
    context.Writer.WriteKeyValue("N", value.Name);
    context.Writer.WriteKeyValue("H", value.Health);

    // Sub-objects can use the `SerializeValue` method.
    context.Writer.WriteKey("P");
    context.SerializeValue(value.Position);

    // Arrays can also use the `SerializeValue` method.
    context.Writer.WriteKey("I");
    context.SerializeValue(value.Inventory);
}

This would give:

{
    "N": "Bob",
    "H": 100,
    "P": {
        "x": 10,
        "y": 20
    },
    "I": [
        {
            "Name": "Sword",
            "Type": 0
        },
        {
            "Name": "Shield",
            "Type": 1
        },
        {
            "Name": "Health Potion",
            "Type": 2
        }
    ]
}

Next, I want to serialize the Position on a single line. I can define an adapter for the int2 type to serialize it as a value rather than an object:

public void Serialize(in JsonSerializationContext<int2> context, int2 value)
{
    // Serializes as a value.
    context.Writer.WriteValue($"{value.x}, {value.y}");

    // Serializes as an object. This would be equivalent to the default behaviour.
    // using var objectScope = context.Writer.WriteObjectScope();
    // context.Writer.WriteKeyValue("x", value.x);
    // context.Writer.WriteKeyValue("y", value.y);
}

Similarly, you could do the same thing for the Item:

public void Serialize(in JsonSerializationContext<Item> context, Item value)
{
    context.Writer.WriteValue($"{value.Name} - {value.Type}");
}

This would give:

{
    "N": "Bob",
    "H": 100,
    "P": "10, 20",
    "I": [
        "Sword - Weapon",
        "Shield - Armor",
        "Health Potion - Consumable"
    ]
}

Lastly (and this will make the output bigger again), let’s say you wanted to write the Item array manually inside the Player adapter, bypassing the item adapters. You could replace:

// Arrays can use the `SerializeValue` method.
context.Writer.WriteKey("I");
context.SerializeValue(value.Inventory);

with this:

// Write the same array, but manually, by-passing the Item list and item adapters.
context.Writer.WriteKey("I");
using var arrayScope = context.Writer.WriteArrayScope();
for (var i = 0; i < value.Inventory.Length; ++i)
{
    using var arrayItemValueScope = context.Writer.WriteObjectScope();
    context.Writer.WriteKeyValue("Name", value.Inventory[i].Name);
    context.Writer.WriteKeyValue("Type", (int)value.Inventory[i].Type);
}

Which would give:

{
    "N": "Bob",
    "H": 100,
    "P": "10, 20",
    "I": [
        {
            "Name": "Sword",
            "Type": 0
        },
        {
            "Name": "Shield",
            "Type": 1
        },
        {
            "Name": "Health Potion",
            "Type": 2
        }
    ]
}

Hope this helps!

Thanks, much appreciated! :slight_smile:
If you can, link your post in the docs. That’ll be super helpful.
What is really confusing about this is that there’s no explanation of what constitutes an object or a scope, and when to open/close them, as you pointed out. I’m pretty sure I tripped over exactly this because I just could not get the nesting to work.

Thank you for your detailed examples! You actually just taught me a few things =D

Thanks for the response. I understand; I still hope that if development gets picked up again, this post might get revisited =)

I’m currently “re-designing the wheel” as a fun project where I’m making my own base “Object” class that should mimic many of the features of a Unity Object, but with my custom perks and many editor-only features.

This was before I noticed that you had already released Unity.Properties (which I didn’t know anything about) and the Runtime Binding API. With those + Unity.Serialization and the source-generator compatibility, I’m just astounded by the power we have available.

I hope we are getting to an age of “gosh how I LOVE working like this” :):):slight_smile:

I might leave some other suggestions/problems I find while using this.

By the way, classic serialization rules don’t really apply with this package, and I believe that’s not properly documented? I don’t know about the Json part, but with Binary you can actually serialize open generics, and any class is actually serialized by reference, with the “inline” behaviour of classes with [SerializeField] not being respected (null can be serialized).

The open-generics thing actually works wonders for me =D (though it probably breaks in AOT if I don’t manually preserve those classes, but anyway, it’s so cool)

Once again, thanks for the kind words!

Please do!

Correct: when using Unity.Properties, we will generate properties for fields with both [SerializeField] and [SerializeReference], but they are both treated as polymorphic types when the field is a reference type.

With classic serialization, you can still end up with a null value for a field with [SerializeField], but it gets patched once it’s “looked at”, which is a behaviour we didn’t want to have for Unity.Properties.

There is a small note in here, perhaps we should make it clearer.

As long as they are used somewhere, they should get included in the build. You can also force the generation of the property bag for a given type to ensure it is referenced.

Nonetheless, say I have a class like:

[Serializable]
[GeneratePropertyBag]
public class Property<T>  : IProperty<T>
{
   [SerializeField]
   private T? _value;
}

And I only reference it via generic calls, or generic interfaces that don’t even mention Property, but instead some IProperty that will end up looking at the deserialized value. That should not get AOT-generated, right?

This imaginary class would have been instantiated somewhere through reflection via

var propertyType = typeof(Property<>).MakeGenericType(someValueType)
var property = Activator.CreateInstance(propertyType);
...

and then serialized, so concrete implementations are really never directly mentioned.

I believe I should have some pipeline that, at some point during the build or asset editing, can collect all serialized types that should be preserved and create a script like:

static class PreserveGenerics
{
  /// never actually called
  [UnityEngine.Scripting.Preserve]
  static void Preserve()
  {
     var p1 = new Property<Vector3>();
     var p2 = new Property<SomeCustomClass>();
      ...
   }
}

Might there be an easier way to achieve this? Source generators won’t help me here because it’s not code-dependent generation, but “asset-dependent”?

I’m new to this “dealing with AOT” business, but I think I cannot get away without some explicit “Preserve” mechanism that is necessarily dependent on what is actually serialized.

For the property bag generation, I don’t think [GeneratePropertyBag] will do much here, because we don’t generate property bags for open generic types. You will need to use [GeneratePropertyBagForType(typeof(...))] and pass it a “closed” type.

But yeah, generally, if you only create instances through reflection and go through an interface, the types might not get preserved. Having the property bag generated will help, though.

Ohh, perfect, so I could probably get away with replacing the “PreserveGenerics” class with just

[assembly: GeneratePropertyBagForType(typeof(Property<Vector3>))]
[assembly: GeneratePropertyBagForType(typeof(Property<SomeCustomClass>))]
...

as the property bag will already cause it to be preserved =D

ty!

Is it by design that JsonSerialization cannot be called from a MonoBehaviour’s OnAfterDeserialize()? I’m getting a “System out of memory” exception from the ReadJob.Run() inside JsonSerialization.FromJson(..) in the Editor.

It only happens on assembly reload or on entering Play Mode (with assembly reloads); any other call to OnAfterDeserialize works. And moving it to Awake() works.

(I removed all custom code and only call JsonSerialization.FromJson with a non-null, valid json string, and it doesn’t even seem to get to any IJsonAdapter; it crashes before.)

Edit: it really happens with anything, like

        public void OnAfterDeserialize()
        {
            string json = JsonSerialization.ToJson(5, default);
            int value = JsonSerialization.FromJson<int>(json, default); /// crash
        }

So, in case someone gets the same problem: the crash is happening due to “something” related to Jobs not being ready right after an assembly reload (serialization code appears to run before anything else when the reload happens, even before any [InitializeOnLoadAttribute] code).

There is a method overload for JsonSerialization.FromJson that bypasses the call to ReadJob.Run() by providing a SerializedValueView.

Using that method fixes it for me, like so:

static unsafe T FixedDeserialize<T>(string json, JsonSerializationParameters parameters = default)
{
   fixed (char* buffer = json)
   {
      using var reader = new SerializedObjectReader(buffer, json.Length, GetDefaultConfigurationForString(json, parameters));
      reader.Read(out var view);
      return JsonSerialization.FromJson<T>(view, parameters);
    }
}

/// copied from internal method in JsonSerialization
static SerializedObjectReaderConfiguration GetDefaultConfigurationForString(string json, JsonSerializationParameters parameters = default)
{
     var configuration = SerializedObjectReaderConfiguration.Default;

     configuration.UseReadAsync = false;
     configuration.ValidationType = parameters.DisableValidation ? JsonValidationType.None : parameters.Simplified ? JsonValidationType.Simple : JsonValidationType.Standard;
     configuration.BlockBufferSize = math.max(json.Length * sizeof(char), 16);
     configuration.TokenBufferSize = math.max(json.Length / 2, 16);
     configuration.OutputBufferSize = math.max(json.Length * sizeof(char), 16);
     configuration.StripStringEscapeCharacters = parameters.StringEscapeHandling;

     return configuration;
}

Hi @Canijo , I’m glad you were able to find a way to resolve the issue. I’m not aware of anything specifically in the serialization package that shouldn’t work in that call. It uses a lot of jobs, though.

I’ve tried running simple jobs from that method and I’ve gotten crashes on domain reload. So I think, as you said, that something is not ready in Jobs.

Well, as long as we have a temporary workaround, it’s no biggie =).

Edit: Also, running the original method in a Player build without the fix is not a problem, because there everything is actually ready; it’s only a problem in the Editor because of the order of things happening during assembly reloads. I’ve actually headbanged against this particular issue a number of times: there is no way to run anything before deserialization happens for UnityObjects that were active during the reload. It’s annoying because it forces me to lazy-initialize some static classes whenever they are called, so they are “ready” when and if serialization needs them. But it won’t always let me fully initialize, because even though in the Editor things are actually single-threaded, the Editor still prevents accessing some Unity APIs as it assumes you might be on the loading thread (deserialization). So I end up needing a “double initialization”, where the non-Unity side can lazy-initialize, but the side that touches Unity needs to wait until [InitializeOnLoadMethod]. But that’s a problem for another day xD

Posting some things that I’ve found while using this, in case this gets revisited.

  • Exceptions thrown during JsonSerialization are wrapped into a DeserializationEvent that later throws. While I think this can be helpful, the fact that the exception is re-thrown makes us lose the stack trace and any meaningful information. I believe we could save some stressed programmers some time if all exceptions were captured into an ExceptionDispatchInfo so the stack trace is preserved: later they can still get grouped into an AggregateException, or directly re-thrown through ExceptionDispatchInfo.Throw() if there is only one of them, which would preserve all that beautiful stack trace.
  • Some validation errors concerning ObjectScope or ArrayScope could be more easily debugged if you dumped the incomplete generated json that failed to be properly produced. These validation errors are normally thrown outside of the code that caused the problem, whenever scopes are getting disposed and the writer sees that it’s missing some closures. Being able to see the incomplete json, I think, makes it easier to deduce where we actually started producing errors.
  • IContravariantAdapter could avoid boxing its De/SerializationContext if the methods accepted the context as a generic with a struct + interface constraint, maintaining the benefits of a readonly struct passed through the “in” keyword (not quite true until .NET gains a readonly struct constraint, soz xD), like:
public interface IContravariantBinaryAdapter<in TValue> : IBinaryAdapter   
{
    void Serialize<TContext>(in TContext context, TValue value) 
            where TContext : struct, IBinarySerializationContext;

    object Deserialize<TContext>(in TContext context) 
             where TContext : struct, IBinaryDeserializationContext;
}
  • Deserializing into an existing instance is actually implemented but not publicly exposed. I’m not referring to JsonSerialization.FromJsonOverride, but rather to the DeserializeValue&lt;T&gt;() methods in both the Binary and Json contexts. This would allow us to deserialize into readonly reference fields or fields that are initialized through the object constructor, or just plainly override some object (imagine an Undo-like system; this would just be perfect), just like you do inside the visitor. Without it, it’s just too cumbersome to implement, and all the code is actually already there on your side.
// Existing method
public T DeserializeValue<T>()
{
    var value = default(T);
    m_Visitor.ReadValue(ref value);
    return value;
}

// My desired method overload
public void DeserializeValue<T>(ref T value)
{
    m_Visitor.ReadValue(ref value);
}
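With that overload exposed, an adapter could overwrite a live instance instead of allocating a replacement. A rough sketch of the Undo-like case I mean (GameState and UndoAdapter are placeholder names, and DeserializeValue(ref T) is the requested overload, which does not exist publicly today):

```csharp
using Unity.Serialization.Json;

class GameState { public int Score; }

// Sketch only: restores an Undo snapshot into the existing GameState
// instead of allocating a new one.
class UndoAdapter : IJsonAdapter<GameState>
{
    public GameState Target; // the live instance we want to overwrite

    public void Serialize(in JsonSerializationContext<GameState> context, GameState value)
        => context.ContinueVisitation();

    public GameState Deserialize(in JsonDeserializationContext<GameState> context)
    {
        var value = Target;
        context.DeserializeValue(ref value); // fills `value` in place
        return value;
    }
}
```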
  • Adapters for types that we want to serialize as references cannot be stateless, and thus can never be global. What I mean is that there is no way to share data between adapters within a single serialization call without configuring them beforehand. An adapter that serializes a value as a reference will typically embed the instance data the first time it is written and write only an ID on subsequent writes within the same serialization call, just as you do with your SerializedReferences inside your visitor. This is not a blocking problem: you can give your adapters Prepare and Finish methods that set up and clear a state object used to store the shared data. But it would be ideal if we could pass an “object” to the serialization entry point (like you do with Migrations for Json) that is later accessible from within the serialization/deserialization context, or even store key-value pairs on the context itself. Adapters would then need no local state, so some custom adapters could be written as global without being configured prior to serializing, or at least be included without per-call configuration. I hope I’ve managed to explain what I mean.
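For concreteness, the Prepare/Finish workaround I mean could look roughly like this. This is only a sketch: Node is a placeholder user type, and I am assuming the context’s Writer, SerializedValue, and ContinueVisitation members behave as in the adapter docs.

```csharp
using System.Collections.Generic;
using Unity.Serialization.Json;

class Node { public string Name; public Node Next; }

// Sketch: serializes repeated Node instances by reference. Because the
// id tables live on the adapter, it cannot be global; Prepare/Finish
// must be called around every serialization pass.
class NodeByRefAdapter : IJsonAdapter<Node>
{
    Dictionary<Node, int> m_Ids;
    List<Node> m_Instances;

    public void Prepare()
    {
        m_Ids = new Dictionary<Node, int>();
        m_Instances = new List<Node>();
    }

    public void Finish()
    {
        m_Ids = null;
        m_Instances = null;
    }

    public void Serialize(in JsonSerializationContext<Node> context, Node value)
    {
        if (m_Ids.TryGetValue(value, out var id))
        {
            context.Writer.WriteValue(id); // back-reference: write only the id
            return;
        }
        m_Ids.Add(value, m_Ids.Count);
        context.ContinueVisitation();      // first encounter: embed the data
    }

    public Node Deserialize(in JsonDeserializationContext<Node> context)
    {
        if (context.SerializedValue.Type == TokenType.Primitive)
            return m_Instances[(int) context.SerializedValue.AsInt64()];

        var instance = context.ContinueVisitation();
        m_Instances.Add(instance);
        return instance;
    }
}
```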
  • JsonSerializationState can be used to share references between different serialization calls, but the binary equivalent is internal. I’m not personally using it, but I believe that’s an oversight?

Loving this package :slight_smile:

Also, about validation errors: in the “FromJson” method, errors are collected into the DeserializationEvents, so we have a way to get notified when multiple errors happen. But in “ToJson”, when validation errors from ObjectScope / ArrayScope occur, they are thrown from within the “Dispose” method, so they steal any exception that might have triggered that Dispose in the first place, completely hiding the underlying problem. That makes it hard to debug and forces you to step through until you can catch the actual exception you are interested in. It would be nice if those scopes did not throw on Dispose and instead just logged the error and prevented further writing in some other manner, so the real exceptions can actually get thrown.
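This is just how the language works: an exception thrown from Dispose inside a using block replaces the one already in flight. A minimal standalone repro (no package APIs involved):

```csharp
using System;

class Scope : IDisposable
{
    // Mimics ObjectScope: validates on Dispose and throws if incomplete.
    public void Dispose() => throw new InvalidOperationException("scope not closed");
}

static class MaskingDemo
{
    static void Main()
    {
        try
        {
            using (new Scope())
            {
                // The exception we actually care about...
                throw new ArgumentException("real problem");
            }
        }
        catch (Exception e)
        {
            // ...is replaced by the validation error thrown from Dispose.
            Console.WriteLine(e.Message); // prints "scope not closed"
        }
    }
}
```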

  1. This is really helpful, but I still don’t know how to implement deserialization. Could you provide some examples of deserialization?

  2. The documentation doesn’t provide details on how generics and collection types are handled/supported. Could you briefly introduce that?

  3. I seem to have found a bug. The following code executes successfully:

Dictionary<string, float> d = new() { { "Key", 1f } };
Debug.Log(JsonSerialization.ToJson(d));

But if you change 1f to float.PositiveInfinity, the following error occurs:
“InvalidOperationException: WriteValue can only be called as a root element, array element, or after WriteKey.”

  1. Please make BinarySerialization support System.IO.Stream. A lot of existing code is based on that type and its derived types, not on UnsafeAppendBuffer, so extra transfer operations are currently necessary between them. Additionally, it seems that the FromBinaryOverride method is missing.
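In the meantime, the bridging (with its extra copy) looks roughly like this. This is a sketch assuming the package’s ToBinary(UnsafeAppendBuffer*, T) entry point and the Unity.Collections UnsafeAppendBuffer API; a native Stream overload would remove the intermediate byte[] copy.

```csharp
using System.IO;
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;
using Unity.Serialization.Binary;

static class StreamBridge
{
    // Workaround: serialize into an UnsafeAppendBuffer, then copy the
    // bytes out into a managed array so they can be written to a Stream.
    public static unsafe void ToStream<T>(Stream stream, T value)
    {
        var buffer = new UnsafeAppendBuffer(16, 8, Allocator.Temp);
        try
        {
            BinarySerialization.ToBinary(&buffer, value);

            var bytes = new byte[buffer.Length];
            fixed (byte* dst = bytes)
                UnsafeUtility.MemCpy(dst, buffer.Ptr, buffer.Length);

            stream.Write(bytes, 0, bytes.Length);
        }
        finally
        {
            buffer.Dispose();
        }
    }
}
```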