Object reference is null after deserialization?

I am trying to optimize an object set in my project using ISerializationCallbackReceiver to transform a list into a dictionary. However, some values (the keys, to be precise) are null after deserialization despite not being null in the inspector. Here is a simplified version of my setup:

public class MyKeyObject : ScriptableObject { /*...*/ }

// Instances stored in the set are always instances of classes derived from MyValueObject.
[Serializable]
public abstract class MyValueObject : ScriptableObject {
    public abstract MyKeyObject Key { get; }
}

My set looks like this:

public class ObjectSet : ScriptableObject, ISerializationCallbackReceiver
{
    [SerializeField]
    private List<MyValueObject> objects = new List<MyValueObject>();

    [SerializeField]
    private Dictionary<MyKeyObject, MyValueObject> indexedObjects = null;

    public void OnAfterDeserialize()
    {
        indexedObjects = new Dictionary<MyKeyObject, MyValueObject>();

        MyValueObject item;
        for (int index = 0; index < objects.Count; index++)
        {
            item = objects[index];
            if (item != null)
            {
                if (!indexedObjects.ContainsKey(item.Key))
                {
                    indexedObjects.Add(item.Key, item);
                }
            }
        }
    }

    public void OnBeforeSerialize()
    {
        objects.Clear();
        foreach (KeyValuePair<MyKeyObject, MyValueObject> pair in indexedObjects)
        {
            objects.Add(pair.Value);
        }

        /* WORKAROUND: This always adds a null item to the list in the inspector to allow adding more values;
         * however, it is not included in the dictionary after deserialization.
         */
        objects.Add(null);
    }
}

However, in the editor I get an error in the console.

Why are the keys of my MyValueObject items, accessed through the property on the abstract base class, null when I try to index them into my dictionary after deserialization, even though they show up properly in the inspector and in my project files?

In other words, I have a collection of ScriptableObjects, each holding a reference to another ScriptableObject of a different type that is exposed through a property, but accessing that property during deserialization returns null.

How are they being serialized? If it’s anything that Unity is in charge of (serialized prefabs/scenes, JsonUtility, etc.), none of Unity’s serialization is any good at serializing polymorphic data types; I believe it just serializes them as the base class and nothing more. I suspect that you have a class derived from MyValueObject which has a meaningful “Key” member, but Unity’s serialization doesn’t know it’s one of those, so it just fails.

Yeah, it’s really annoying and yeah it’s definitely a Unity bug. If it helps, it does seem to be on the roadmap (ctrl-F “serialization” on this page) for 2019.3 - you may want to try installing the 2019.3 beta if this kind of serialization is vital.
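For context, the 2019.3 feature being referred to here is most likely the [SerializeReference] attribute. A minimal sketch with a hypothetical Shape/Circle hierarchy (my own example, not from this thread); note that it only applies to plain C# classes, not to ScriptableObject asset references:

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

[Serializable]
public abstract class Shape
{
    public abstract float Area();
}

[Serializable]
public class Circle : Shape
{
    public float radius;
    public override float Area() { return Mathf.PI * radius * radius; }
}

public class ShapeHolder : MonoBehaviour
{
    // With a plain [SerializeField], each entry would be flattened to the
    // declared base type and the derived fields would be lost;
    // [SerializeReference] (Unity 2019.3+) preserves the derived type.
    [SerializeReference]
    private List<Shape> shapes = new List<Shape> { new Circle { radius = 2f } };
}
```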

I second StarManta. You could try to hack something that allows you to cast to your implementation class before accessing the key property, so that Unity knows how to resolve the thing. At least try this to verify whether this is actually the problem.
Unity really has a few pitfalls when it comes to inheritance, interfaces and generics.

All of my derived classes from MyValueObject look like this:

[Serializable]
public class MySpecialValueObject : MyValueObject {
    [SerializeField]
    private MyKeyObject key;
    public override MyKeyObject Key { get { return key; } }
}

It is so weird that Unity apparently serializes this property but fails to fetch it after deserialization, returning null. I know that it does serialize it because I access this property in some custom editors and editor windows without problems, but not when accessing it via ISerializationCallbackReceiver in OnAfterDeserialize().

I am going to try making it a non-abstract property in the base class; maybe it won’t fail then, because it would no longer be a property of the derived classes. Just crossing my fingers here.

Seriously, I am so sick of it. I study computer science in college and I used to adore Unity as an engine. That was until I learned about inheritance, polymorphism, interfaces and generics in C# and how Unity doesn’t support any of that in their engine. Back in 2018, I worked on some smaller projects in my free time, just tinkering around a bit, and I kept running into issues like this, which ultimately made me literally ragequit Unity.

I tried the Unreal Engine 4 for one college project with some success; however, it has the huge disadvantage of not having a lot of source material to look things up and learn how to use the components the engine gives you. Editor scripting, for instance, is rather hard and very inconvenient in the Unreal Engine 4, from what I learned.

In the end, I am back to the Unity engine now, but I don’t think I am going to use it much longer (I have to for now because my workplace uses it), considering the many problems Unity has with such basic features as object serialization and component decoupling. It leaves the impression that Unity aims to lure in people who just want some quick success, while people who try to do things properly and design a decent architecture are left alone and somewhat ignored. I mean, how long has the object serialization issue existed? Since 2014, maybe even 2010 or 2009? Even considering that Unity hasn’t used C# for that long, it is embarrassing that such a basic feature is so neglected after such a long time, despite being the very backbone of an overwhelmingly large collection of engine features.

I am well aware that there are third-party tools that give you a decent serialization and inspector replacement for the Unity editor, but for something as basic as object serialization, which literally any game engine should support, this is unacceptable. A procedural spline mesh generation tool? Sure, I’ll look it up in the store. Something basic like serializing generics and being able to reference objects that implement a certain interface? Ugh…


I am with you, it is a bummer. But…

Back in the day, Unity decided to go for the Component approach with a service locator pattern. The idea was to have components instead of interfaces, inheritance and generics, so these concepts are kinda second-class citizens. They could surely have improved on this. It goes back to a time, though, when you probably did not find many game devs with a demand for it.

This keeps me psyched about DOTS (the new ECS approach). So far, the overall support is not quite there yet and it is a bit of work to get going with it. But it is much more modern and performant than GameObjects and Components. So maybe, instead of improving on the old construct, this is the way to go for devs that want a ‘proper’ code architecture. Maybe this is for you :slight_smile:

Please excuse me, I glanced over this earlier without going into it, with my blood still kinda boiling from running into my issue (the whole situation is very time-sensitive for me, for multiple reasons). Do you mean something like this?

// objects[index] is declared as MyValueObject
MySpecialValueObject item = objects[index] as MySpecialValueObject;
if (item != null)
{
    /* do stuff with item.Key, which should not be null */
}

Thanks for reminding me of DOTS, I am quite excited for it. From what I have read about it so far, it is exactly how I try to program my games, so it might indeed be what I am looking for. Fingers crossed that it turns out to be a way to create a code structure with a stronger design in mind, away from the Component approach.

While I understand the huge advantage of easily working with the Component pattern in the editor, I am somewhat surprised that it used to be, or still is, so popular, because the more I learn about it, the less fond I am of it, considering the many disadvantages it has at runtime. This might also be why many people see Unity as a slow and tedious engine when it comes to performance. On Steam, I have seen people literally refuse to play Unity games because the performance was bad in the ones they had played.

I think not being able to optimize at the engine level, unlike in the Unreal Engine 4 for instance, takes a whole layer of optimization away. Even something like ISerializationCallbackReceiver is more of a workaround than a solution, because it apparently has to run every time an object is deserialized after instantiation, whereas an object should be serialized in a way that doesn’t cost additional performance on deserialization.

Yeah, exactly. If that helps, you know it is something with deserialization not getting along with your level of abstraction. Not that it would be a solution, then.

Well, if you’ve seen Book of the Dead or Project Tiny, I don’t think this makes a lot of sense. I think you can build a game with shitty performance in any engine. Those people probably think of games that start with the default splash screen. It might not be the best ad for Unity to have those games represent the engine most visibly.

God, I hate that. It’s so incredibly wrongheaded. As olejuer says, rookies can make badly-performing games in any engine. Literally the only reason that Unity is associated with bad performance is because the splash screen only shows up on games made with the free version, which obviously is going to be the most amateur games.

One of the most egregious examples is Kerbal Space Program. KSP was written in Unity, and has some performance problems. When KSP 2 was announced, many people on the KSP forums said that they hoped that it was not using Unity (the devs confirmed it was). The people moaned that performance was going to suck, because it was Unity, and said they weren’t interested in buying KSP 2.

Now here’s the hilarious part: They said that if you wanted to have good performance on a space/physics game, that they’d have to use the engine that SimpleRockets 2 uses, since that game (which is very similar to KSP) is silky smooth.

Three guesses what engine SR2 uses?

Okay, I did this, same outcome, despite explicitly casting to MySpecialValueObject. I added instances of a type MySpecialValueObject, which derives from MyValueObject, to my set. After changing the code and going back to Unity, I got the same error and my set was empty again.

It behaves quite oddly, though. If I add objects of a type derived from MyValueObject to my set, at first nothing happens. I can save my project, and my source control also detects changes to that set asset. If, and only if, I make changes to my code and Unity recompiles the project, it throws my changes out of the window and gives me that error message.

Also, interestingly, it doesn’t do this for every item in every set. There is one older set with two items of type MySpecialValueObject that remain in the set even after code changes. Adding those very items to another set, however, throws them out once I change the code, so I have no clue what the logic behind that is.

I agree with you guys. That dude got called out there, too, with users bringing similar arguments and examples to yours. I do believe there is a little grain of truth somewhere, with Unity often having performance problems, but it would be too easy to say it is the engine’s fault.

The issue I described in my original opening post appears to be specific to Windows. My own computer runs Windows 10, while my workplace uses macOS, and there the issue is simply not present.

I solved my problem by introducing a nested class representing the key-value relationship between my MyKeyObject and MyValueObject:

    [Serializable]
    public class KeyConstraintPackage
    {
        public MyKeyObject Key;
        public MyValueObject Value;

        public KeyConstraintPackage(MyKeyObject key, MyValueObject value)
        {
            Key = key;
            Value = value;
        }
    }

As well as updating my interface methods to:

    [SerializeField]
    private List<KeyConstraintPackage> constraints = new List<KeyConstraintPackage>();

    [SerializeField]
    private Dictionary<MyKeyObject, MyValueObject> indexedConstraints = null;

    public void OnAfterDeserialize()
    {
        indexedConstraints = new Dictionary<MyKeyObject, MyValueObject>();
        KeyConstraintPackage item;
        for (int index = 0; index < constraints.Count; index++)
        {
            item = constraints[index];
            if (item != null && item.Key != null && item.Value != null)
            {
                if (!indexedConstraints.ContainsKey(item.Key))
                {
                    indexedConstraints.Add(item.Key, item.Value);
                }
                #if UNITY_EDITOR
                else
                {
                    Debug.LogWarning("/.../.");
                }
                #endif
            }
            #if UNITY_EDITOR
            else
            {
                Debug.LogWarning($"An item in the set is null, has no valid key or no valid value configurated.");
            }
            #endif
        }
    }

    public void OnBeforeSerialize()
    {
        if (indexedConstraints != null)
        {
            indexedConstraints.Clear();
            indexedConstraints = null;
        }
    }

This appears to work, as I don’t get error messages or my own warning messages anymore.

PS: Is there any reason to implement OnBeforeSerialize() properly if one has only a one-directional relationship, i.e. my dictionary being created from my list, but not vice versa?

Seems like a more natural way to go about this, anyway. At least to me. I don’t understand why this would work while your first approach wouldn’t. Pretty weird.
I don’t think there is any reason to implement OnBeforeSerialize other than keeping the list and dictionary in sync. I like to create the list from the dictionary in OnBeforeSerialize so that the list’s only purpose is serialization. That way I don’t have to think about modifying both the list and the dictionary everywhere.
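A minimal sketch of what that symmetric OnBeforeSerialize could look like, reusing the field names from the solution above (assuming the dictionary is treated as the runtime source of truth):

```csharp
public void OnBeforeSerialize()
{
    // Rebuild the serialized list from the runtime dictionary, so the
    // list exists purely as Unity's on-disk representation.
    constraints.Clear();
    if (indexedConstraints == null) return;
    foreach (KeyValuePair<MyKeyObject, MyValueObject> pair in indexedConstraints)
    {
        constraints.Add(new KeyConstraintPackage(pair.Key, pair.Value));
    }
}
```

With this in place, gameplay code only ever touches the dictionary, and the list is regenerated automatically whenever Unity serializes the asset.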

It gets even weirder. On Windows 10, I don’t get any error message or one of my warnings, but whenever I open the project on macOS I get my warning "An item in the set is null, has no valid key or no valid value configurated.".

I agree that it is more natural; it’s just a shame because I already know the key from the value object anyway, so now I have to set it explicitly instead of reading it out.

If there is no downside to that, I am going to do that as well, thanks. It is a little strange that they force you to implement both nonetheless; I wish they would split this interface into two, one for OnBeforeSerialize and one for the other.