Hi, I’m trying to reduce the dependencies between subsystems in my code by using interfaces. It’s this classic arrangement:
consumer (uses interfaces instead of classes) => [interfaces] <= services (implement a given interface)
However, I’m having a hard time doing this, since in Unity most dependencies come from public fields assigned via the Inspector. So, in order to use a system, I’d have to assign an interface in the Inspector, but, as you might know, that’s not possible in Unity.
Should I try to bypass this restriction and allow interfaces to be publicly assigned, or should I use some other scheme?
Unity cannot serialize references to interface types. However, I made this SerializableInterface class. Internally it simply stores a UnityEngine.Object reference, but the property drawer takes care to only accept instances that implement the given interface. It also works with both MonoBehaviours and ScriptableObjects.
When you drag a GameObject onto the field and it has several components that implement the interface, you get a context menu to select which one you want to assign. You can also always drag in the actual component instance by dragging its header as usual.
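For reference, the storage side of such a wrapper can be as small as this (a minimal sketch, assuming Unity 2020.1+ for generic serialized fields; the custom property drawer doing the validation is not shown):
using UnityEngine;

[System.Serializable]
public class SerializableInterface<T> where T : class
{
    [SerializeField] UnityEngine.Object target;

    // Works for MonoBehaviours and ScriptableObjects alike, since both
    // derive from UnityEngine.Object.
    public T Value => target as T;
}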
Unity can serialize plain C# types implementing an interface via [SerializeReference], though it doesn’t have built-in support for assigning those references without your own custom inspector work. Tools like Odin Inspector + Serializer can reference Unity objects via interfaces, with the caveat that it’s unstable with prefabs.
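For example (a minimal sketch; IService and LoggingService are made-up names):
using UnityEngine;

public interface IService
{
    void Run();
}

// A plain C# class, no UnityEngine.Object involved.
[System.Serializable]
public class LoggingService : IService
{
    public void Run() => Debug.Log("Running");
}

public class Client : MonoBehaviour
{
    // SerializeReference stores the instance polymorphically, but the
    // default Inspector won't let you pick an implementation; you create
    // it in code or write a custom drawer.
    [SerializeReference] IService service = new LoggingService();
}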
Otherwise various tools exist to allow you to do this. It’s a problem the community has solved many times over.
That’s true, though it’s worth pointing out that it does not support serializing references to UnityEngine.Object-derived types, which was the main idea here, I think: linking up components like usual, but not having them directly depend on each other, instead having an interface as common ground.
I went with bypassing the restriction personally. I just find it really useful in some cases to be able to assign any object that implements an interface, rather than only being able to drag in objects that derive from a specific base class.
There are four main strategies, from what I’ve seen, that have been used to overcome the limitation:
1. Serialization via Base Class
- Implement custom serialization logic capable of serializing interface types.
- Implement a custom editor that enables assigning values to the interface type fields via the Inspector.
- Have the client derive from a base class that handles the serialization via ISerializationCallbackReceiver.
class Client : SerializedMonoBehaviour
{
    [OdinSerialize] IService service;
}
Used by Odin Inspector and Serializer.
2. Serialization via Wrapper
- Implement a generic wrapper class capable of serializing interface type objects.
- Implement a custom property drawer for the wrapper that enables assigning values via the Inspector.
- Wrap each interface type field in the client with the wrapper class.
class Client : MonoBehaviour
{
    [SerializeField] SerializableInterface<IService> service;
    IService Service => service.Value;
}
Used by SerializableInterface.
3. Separate Initializer Composer
- Create a separate component responsible for resolving the interface type fields’ values and injecting them to the client.
- The composer can internally use method #1 or #2 to enable assigning values via the Inspector and serializing them.
class ClientInitializer : Initializer<Client, IService> { }

class Client : MonoBehaviour, IInitializable<IService>
{
    IService service;
    public void Init(IService service) => this.service = service;
}
Used by Init(args) (created by me) and Zenject.
4. Source Generators
- Create a source generator that adds code to partial classes that takes care of serializing interface type fields.
- Use a custom editor or property drawer to enable assigning values to the fields via the Inspector.
partial class Client : MonoBehaviour
{
    [SerializeInterface] IService service;
}
Used by [SerializeInterface].
In lieu of using actual interfaces, it’s also possible to implement the facade pattern / adapter pattern to enable specifying different implementations that don’t need to derive from any particular base class:
abstract class Command : MonoBehaviour
{
    public abstract void Execute();
}

sealed class ScriptableObjectCommandAdapter : Command
{
    [SerializeField] ScriptableObjectCommand command;
    public override void Execute() => command.Execute();
}

abstract class PlainClassCommandAdapter<TCommand> : Command where TCommand : ICommand
{
    [SerializeField] TCommand command;
    public override void Execute() => command.Execute();
}
I just want to warn of creating many small assemblies. This can quickly skyrocket your domain reload times if you have several dozen if not hundreds of asmdefs. It will also make managing dependencies a pain.
For example, if you find yourself adding the same ten “subsystem” dependencies to most other assemblies, that would indicate that those ten subsystems should be in a single assembly.
Thanks, actually I didn’t go down that path, for the reasons you discussed.
Mostly, I’m just using namespaces to keep track of the dependencies (not really for naming conflicts, but to separate things and be warned when I need access to another territory). Later, when they’re more finished, I could move them into different assemblies.
In the case of interfaces, I’m using them for the main subsystems, and they have proven useful. The only thing is that, now and then, I hit a wall when following the flow of the code, but that’s rarely the case. When it is, it makes me question my architecture, which is good.
I do find that decoupling in a game is a bit difficult though, because of the intertwined nature of the systems. Say my character gets hurt by someone. There’s the function handling the attack that needs to call the gettingHurt method. When I started working on this, everything was happening inside the animation, since things need to be called from an animation for timing. So, inside that coroutine, there’s the camera system acting, maybe the timeline if there is one, the stat system lowering the HP at the right time (and the morale too), the sound, maybe some IK, maybe some ragdoll, the particle emissions, blood decals, all depending on the way the character got hurt, the storing of the event in the history, the elements to warn the player, etc. And, as I said, all happening inside the animation… which doesn’t seem like the place where you want your logic to be handled.
So I ended up passing these in as functions (roughly like the sketch below). That way the animation is just there to figure out how to time the events, but it doesn’t need to know which function or system is used for each one, and I can make those decisions in a higher-level component. At least, that’s the case for functions not related to the animation, like recording the event and lowering the stats, while it probably makes sense for the character animation component to communicate directly with things like IK, ragdoll, particles, camera and timeline.
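Something along these lines (a rough sketch with hypothetical names and timings):
using System;
using System.Collections;
using UnityEngine;

public class DamageSequence : MonoBehaviour
{
    // The coroutine only decides *when* things happen; *what* happens
    // is passed in from a higher-level component.
    public IEnumerator ReceiveDamageRoutine(Action focusCamera, Action lowerStats, Action playSound)
    {
        focusCamera();                        // e.g. close-up on the wound
        yield return new WaitForSeconds(0.5f);

        lowerStats();                         // the actual change of game state
        yield return new WaitForSeconds(0.2f);

        playSound();
    }
}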
Some things are super easy to decouple, but some are really linked to almost everything, and sometimes going back and forth doesn’t seem to make sense.
The Character is one of those things, the Level is another.
Should be the other way around unless you have very, very special requirements where the animation drives the logic (but generally that’s a bad idea)!
The other way round would be to speed up or slow down an animation to match the timing dictated by game design.
Which tells you that the system isn’t well designed.
Input - Processing - Output. Those are the three main separations and the flow of processing, with processing having a design layer which is just Data (both persistent and runtime-modifiable).
One of the things that makes it hard for devs to think in terms of separation of concerns is Unity’s nature of driving developers to put components on too many GameObjects, rather than doing more of what Entities tells you to do: process all common things in a central system.
I have a class Projectiles which spawns, moves, raycasts, despawns, and fires impact events for ALL projectiles, both the player’s and the enemies’. It also maintains each projectile’s data, e.g. what type it is, how much damage it does, who fired it, what state it’s in. The projectile itself is just a transform and a sphere: no collider, no rigidbody, no scripts.
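Something like this (a simplified sketch with hypothetical fields, not the actual class):
using System.Collections.Generic;
using UnityEngine;

public class Projectiles : MonoBehaviour
{
    struct ProjectileData
    {
        public Transform transform;   // just a transform and a visual
        public Vector3 velocity;
        public float damage;
        public float despawnTime;
    }

    readonly List<ProjectileData> active = new List<ProjectileData>();

    void Update()
    {
        // Iterate backwards so removals don't skip entries.
        for (int i = active.Count - 1; i >= 0; i--)
        {
            ProjectileData p = active[i];
            Vector3 step = p.velocity * Time.deltaTime;

            // One raycast per projectile replaces colliders and rigidbodies.
            bool hit = Physics.Raycast(p.transform.position, step.normalized,
                out RaycastHit hitInfo, step.magnitude);

            if (hit || Time.time >= p.despawnTime)
            {
                // fire impact events here, then despawn
                Destroy(p.transform.gameObject);
                active.RemoveAt(i);
                continue;
            }

            p.transform.position += step;
        }
    }
}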
All of that driven by the animation coroutine?
Nope.
The animation or anything visual for that matter is an afterthought. It has a place at the very end of the chain of logic.
It all starts with input, which drives the character controller, which affects game logic, which drives the animation, which … does little to nothing. The animator just gets a bunch of parameters to do its thing and that’s it. At most, animation events may drive automated non-gameplay events, most commonly footstep sounds or spawning a VFX at a specific frame.
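For example (a minimal sketch with assumed parameter names):
using UnityEngine;

public class CharacterAnimationDriver : MonoBehaviour
{
    [SerializeField] Animator animator;
    [SerializeField] CharacterController controller;

    void Update()
    {
        // Logic feeds the Animator; the Animator never drives logic.
        animator.SetFloat("Speed", controller.velocity.magnitude);
        animator.SetBool("Grounded", controller.isGrounded);
    }
}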
The camera should be influenced only by the character controller, or rather the character’s position, rotation and state. The camera should not depend on animation. Ragdoll is just what happens when the Animator gets disabled.
Well, with all due respect, I don’t think you understood what I was saying.
The reason why all these things are called within the animation is timing. By animation, I mean a coroutine; I think this is what generated the confusion. I think of a coroutine as an animation, since it defines the timing and order of events. My bad for misusing that word. With that said, what I was getting at is that I don’t want to have my logic in a coroutine, since it should be independent of the timing of events. I do call that coroutine RecieveDamageAnimation(), though.
So, the reason why it all has to happen in a coroutine is that I don’t want my character to lower a stat at any arbitrary moment, but when the actual event is happening, and the actual event of being hurt happens after some other events have completed. For example, first I need to wait until the camera is in the right place, then I can lower the stat, which will trigger the GUI response to give feedback to the player, etc. The sound, for example, has to come a bit after that, and when everything has ended the camera returns to normal.
But the logic of lowering the stat, which is what actually creates the real change in the state of the game, gets called by this coroutine. What I was saying is that by passing the logic elements in as function parameters I was able to decouple those elements. I didn’t do this at first, which was messy, I think, and that was what I was commenting on.
Decoupling is not hard because of bad architecture, I think; rather, good architecture is hard because it confronts you with the problem of how to properly separate the concerns, and some systems have lots of things you have to figure out. I guess this could be a trivial topic to you, but to me it’s a bit hard; hopefully some day I’ll get to your level.
When you say that the camera should be influenced by the character controller and state, I don’t really understand what you mean. Obviously the camera is controlled by a huge number of factors other than those, and it doesn’t even target the player in several situations. In this particular case, what I meant was that if I want to make a close-up of the character’s chest (say, if the wound is there), then the coroutine tells the camera-handling system, at the right time, to make that shot. I didn’t mean that the coroutine controls the camera logic, but rather that it is what requests that shot from the camera-controlling subsystem. The position of the camera is also determined by the place of the injury and the origin of the attack, etc. So, yeah, I’m sorry if I spoke lightly of these topics and wasted your time. Anyway, thanks for your answer.
Yeah but that is not “animation”. Glad we cleared this up.
I very rarely use coroutines and when I do, it’s seldom more than “wait, then invoke or destroy”.
Generally almost everyone will advise against using coroutines to “animate” or “drive” logic, or using them to time anything. E.g. my projectile timeouts are simply a float holding a future time: if Time.time is greater than or equal to it, the timeout has elapsed. This embeds well into Update code, rather than having logic run separately on another level, which requires synchronization with no multithreading benefit. If I had to synchronize two methods running independently, at the least I would want them to run in parallel. Alas, coroutines don’t offer that, and checking timing in Update is so trivial that it doesn’t compete with “yield return new WaitForWhatever()”.
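For illustration (a minimal sketch, with a hypothetical 3-second lifetime):
using UnityEngine;

public class ProjectileTimeout : MonoBehaviour
{
    float despawnTime;

    void OnEnable() => despawnTime = Time.time + 3f;

    void Update()
    {
        // Equivalent to "yield return new WaitForSeconds(3f)" followed by
        // a despawn, but it lives in the normal Update flow.
        if (Time.time >= despawnTime)
            gameObject.SetActive(false);
    }
}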
In short: coroutines are overused, and quite often misused.
See, and that’s where you’re shooting yourself in the foot by using coroutines.
Refactor one coroutine-heavy script to not use coroutines at all and you may be surprised how much easier writing and understanding the code becomes.