Tips for writing cleaner code that scales in Unity Part 3: Adding a modular interaction system using interfaces

Welcome to the third post in our five-part article series on how to create cleaner code in Unity projects. Our aim is to provide you with guidance on how to apply general object-oriented programming principles to make your code more readable, well-organized, and maintainable.

We teamed up with Peter from Sunny Valley Studio, who’s created a long list of great YouTube tutorials and courses on the topic. Peter helped to write the articles in the series and created a project you can follow along with.

Here are the other posts in this series:
Article 1: Adding an AI-controlled NPC to a game
Article 2: Adding a jump mechanic using the state pattern
Article 4: Composition and inheritance
Article 5: Using interfaces to build extendable systems in your game


In this article series, we’ll expand the Unity Starter Assets - Third Person Character Controller package with a number of new features to illustrate how to apply SOLID principles and design patterns when creating scalable and robust game mechanics.

In this third article of the series, we’ll create an interaction system that enables our player character to pick up items and interact with the environment. To achieve this, we’ll use interfaces to handle different types of interactions in a consistent way. By defining a shared interface for interactable objects, we can make a system that’s both modular and easy to maintain.

You can follow along with this article by downloading the starter project from GitHub. In the Project tab, go to the _Scripts > Article 3 > Start folder and open the scene.

The Interaction system that we will be creating in this article

If you have read the previous articles in this series, you know that we are using a State pattern implementation to easily extend the behaviors of our player avatar. We have already implemented the InteractState class, the appropriate animation, and the left mouse button (LMB) input as a trigger for our Interact logic. You can find the code that we have so far on GitHub. We plan to extend the existing code by using an interface to create a scalable solution.

The benefits of using interfaces

Before we dive into the solution, let’s quickly recap what interfaces are and the use cases for them.

In object-oriented programming (OOP), an interface is like a contract. It defines a set of methods, properties, events, or indexers that any class or struct must implement if they choose to use the interface. For example, if a class implements the IInteractable interface, it ensures that the class will have an Interact() method. This consistency makes it easier to manage interactions across different classes.

Interfaces also support polymorphism, a key principle of OOP. Polymorphism allows objects from different classes to be treated the same way, as long as they share a common superclass or interface. This makes your code more flexible and reusable.

In the context of our project, this means our Agent, through the InteractState, can interact with various types of interactable objects without needing to know the specifics of each interaction.

What is polymorphism?

Polymorphism is a key concept in OOP that lets objects of different types be treated as if they belong to the same parent type. In C# this is achieved through interfaces or abstract classes, which can represent multiple underlying data types.

For example, with polymorphism, the Agent can call the Interact method on any object that implements the IInteractable interface, regardless of whether it is a PickUpInteractable or a SwitchInteractable. This makes our code more modular, extensible, and easier to maintain.

Read more about polymorphism on Wikipedia.
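To make the idea concrete outside of Unity, here is a minimal, self-contained C# sketch of interface-based polymorphism. The Lever and Chest types are hypothetical examples, and a string stands in for the GameObject interactor so that the snippet runs without UnityEngine:

```csharp
using System;
using System.Collections.Generic;

// The caller only knows about IInteractable, never the concrete types.
public interface IInteractable
{
    void Interact(string interactor);
}

public class Lever : IInteractable
{
    public bool IsOn { get; private set; }
    public void Interact(string interactor) => IsOn = !IsOn;
}

public class Chest : IInteractable
{
    public bool IsOpen { get; private set; }
    public void Interact(string interactor) => IsOpen = true;
}

public static class Demo
{
    // The agent can iterate over a mixed list and call Interact
    // polymorphically, without any type checks or casts.
    public static void InteractWithAll(IEnumerable<IInteractable> objects, string interactor)
    {
        foreach (var obj in objects)
            obj.Interact(interactor);
    }
}
```

This is exactly the property we rely on later: the InteractState only needs to know that the detected object implements the interface.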

Creating a modular interaction system using interfaces

The main concern when creating this kind of system is not just how to add more features to our project but how to make it easy to add more interactions later.

To achieve this, we will define an abstract interface such as IInteractable:


public interface IInteractable
{
    void Interact(GameObject interactor);
}

This will allow us to integrate this system with our existing state pattern:


Agent creates and manages State objects which in turn use the ITransitionRule interface. Agent also creates the ITransitionRule but doesn’t manage it. MovementState and InteractState all inherit from the State abstract class. InteractState uses the InteractionDetector which in turn uses the IInteractable interface which both the SwitchInteractable and the WeaponPickUpInteractable implements.

The diagram above shows that we have an InteractState that makes our character play the correct animation and stops the character from moving (script available on GitHub). It asks a new object called InteractionDetector whether it has detected an IInteractable object in front of the player. This approach applies the single-responsibility principle by keeping the detection logic in a separate object. The crucial part is that the InteractState accesses objects of type IInteractable, which only expose a single method, Interact(GameObject interactor), without concerning itself with the specifics of each interaction.

Why pass GameObject as argument?

You might wonder why we pass a GameObject as an argument to the Interact method. This design choice provides several benefits:

  • Flexibility: Passing the GameObject of the interactor (e.g., the player or another entity) allows the interactable object to access any of its components, such as an inventory system, status effects, or other relevant data.
  • Decoupling: This approach keeps the IInteractable interface simple and focused on defining interactions, without depending on specific types or classes. Decoupling improves the reusability and scalability of the interaction system.
  • Contextual Interaction: Passing the GameObject allows the interactable object to affect it. This way, when we add a new IInteractable object such as a locked door, it can try to access the Inventory component on the player and check whether we have a key.

This approach obviously has its downsides – the IInteractable object needs to assume that calling GetComponent<Inventory>() will give it access to the inventory object. We could instead pass the Agent script directly, making the interactable aware of all the components that the agent has. It’s up to you to decide which solution you prefer.
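The locked-door idea above can be sketched in plain C# without UnityEngine. The Interactor class below mimics GameObject’s generic component lookup, and the Inventory and LockedDoor names are hypothetical illustrations, not part of the project:

```csharp
using System;
using System.Collections.Generic;

// Mimics GameObject.GetComponent<T>(): a generic lookup by type.
public class Interactor
{
    private readonly Dictionary<Type, object> m_components = new();
    public void Add<T>(T component) => m_components[typeof(T)] = component;
    public T Get<T>() where T : class
        => m_components.TryGetValue(typeof(T), out var c) ? (T)c : null;
}

public class Inventory
{
    private readonly HashSet<string> m_items = new();
    public void Add(string item) => m_items.Add(item);
    public bool Has(string item) => m_items.Contains(item);
}

public class LockedDoor
{
    public bool IsOpen { get; private set; }

    // The door only *assumes* an Inventory may be present; it degrades
    // gracefully when it is not, instead of depending on a concrete Agent type.
    public void Interact(Interactor interactor)
    {
        var inventory = interactor.Get<Inventory>();
        if (inventory != null && inventory.Has("Key"))
            IsOpen = true;
    }
}
```

The trade-off is visible in the sketch: the door stays decoupled from any Agent class, but it silently does nothing if the interactor lacks the component it expects.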

Interacting with a Switch GameObject

Let’s implement the SwitchInteractable class first:

public class SwitchInteractable : MonoBehaviour, IInteractable
{
    private bool m_isSwitched = false;
    [SerializeField] private Animator m_animator;
    [SerializeField] private string m_animationTriggerName = "Activate";

    private void Awake()
    {
        // Fall back to the Animator on this GameObject if none was assigned in the Inspector
        if (m_animator == null)
            m_animator = GetComponent<Animator>();
    }

    public void Interact(GameObject interactor)
    {
        if (m_isSwitched)
            return;
        m_isSwitched = true;
        m_animator.SetTrigger(m_animationTriggerName);
    }
}

In this example our class implements the IInteractable interface; it triggers an animation on the object and sets the bool flag m_isSwitched to true, making it impossible to interact with this object again.

We now need to add this component onto the Switch object inside our scene. Notice that it also has an “Interactive” layer assigned. We will use that later for detecting those interactive objects.


Our Switch object will have a SwitchInteractable component and the Highlight component. It also needs a trigger BoxCollider and to be on the “Interactive” layer so that we can detect it when raycasting.

A script called Highlight will allow us to add an outline around the interactable object using the Render Objects Renderer Feature in URP. You can find the custom shader used for this task inside the _Shaders folder in the project.

Implement an InteractionDetector object

To integrate our IInteractable interface with the InteractState, using our existing logic, we will create an InteractionDetector object that finds the IInteractable objects. It casts a sphere in front of the character in order to detect interactable objects based on the specified LayerMask. In our case, it’s the “Interactive” layer that we have applied to our Switch object.


The green gizmo sphere shows that we have successfully detected the Switch object (the blue sphere), which is outlined in white by our outline system.

Here is the InteractionDetector code (the full version can be found on GitHub):

public class InteractionDetector : MonoBehaviour
{
    [SerializeField]
    private float m_detectionRange = 2.0f;
    [SerializeField]
    private float m_detectionRadius = 0.5f;
    [SerializeField]
    private float m_height = 1.0f;
    [SerializeField]
    private LayerMask m_detectionLayer;

    public IInteractable CurrentInteractable { get; private set; }

    private Highlight m_currentHighlight;

    public void DetectInteractable()
    {
        // Uses an OverlapSphere cast to detect IInteractable objects
    }

    private void ClearCurrentInteractable()
    {
        // Clears the current selection
    }
}

The most important element of this script is the property CurrentInteractable { get; private set; } that will allow our other scripts to access the detected object.

We’ll make the DetectInteractable() method public so that we can call it from our Agent script (we’ll explore why later). We could also call it in the Update() method of this script but again, we will discuss soon why we might not want to do that.

Here is the updated MoveInteractTransition class, which uses the InteractionDetector to allow the transition to the InteractState only if we are detecting something (script on GitHub):

public class MoveInteractTransition : IEventTransitionRule
{
    private IAgentInteractInput m_interactInput;
    private InteractionDetector m_detector;
    private bool m_interactFlag;
    public Type NextState => typeof(InteractState);

    public MoveInteractTransition(IAgentInteractInput interactInput, InteractionDetector detector)
    {
        m_interactInput = interactInput;
        m_detector = detector;
    }

    public bool ShouldTransition(float deltaTime)
    {
        return m_interactFlag;
    }

    public void Subscribe()
    {
        m_interactInput.OnInteract += HandleInteraction;
    }

    private void HandleInteraction()
    {
        m_interactFlag = m_detector.CurrentInteractable != null;
    }

    public void Unsubscribe()
    {
        m_interactInput.OnInteract -= HandleInteraction;
    }
}

When the OnInteract event is emitted, m_interactFlag is set to the detection result, which in turn triggers the transition to a new state. Since our code creates a new MoveInteractTransition object each time we enter the MovementState, we don’t need to worry about resetting the bool flag. This might not be the most efficient approach in terms of performance, but it keeps the logic simple.
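The event-then-flag flow above can be reduced to a small Unity-free sketch. The InteractInput and EventTransitionRule names below are hypothetical stand-ins, and a Func&lt;bool&gt; replaces the detector’s CurrentInteractable check:

```csharp
using System;

// A minimal sketch of the event-driven transition idea: the rule caches
// the detection result at the moment the input event fires, and
// ShouldTransition just reads the cached flag.
public class InteractInput
{
    public event Action OnInteract;
    public void Fire() => OnInteract?.Invoke();
}

public class EventTransitionRule
{
    private readonly InteractInput m_input;
    private readonly Func<bool> m_hasTarget;   // stands in for m_detector.CurrentInteractable != null
    private bool m_flag;

    public EventTransitionRule(InteractInput input, Func<bool> hasTarget)
    {
        m_input = input;
        m_hasTarget = hasTarget;
    }

    public void Subscribe() => m_input.OnInteract += Handle;
    public void Unsubscribe() => m_input.OnInteract -= Handle;
    private void Handle() => m_flag = m_hasTarget();
    public bool ShouldTransition() => m_flag;
}
```

Because the flag is only written inside the event handler, pressing the interact button while nothing is detected leaves the rule dormant, exactly as in the project’s MoveInteractTransition.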

The IEventTransitionRule interface is part of the improved State Pattern implementation from article 2:

public interface IEventTransitionRule : ITransitionRule
{
    void Subscribe();
    void Unsubscribe();
}

This allows our state transitions to react to input in the form of events by automatically subscribing and unsubscribing to them. You can see the changes made to the State script.

Next we have attached the InteractionDetector to the player GameObject:


Our Player GameObject will now have an InteractionDetector script and the WeaponHelper script.

Make sure to set the Detection Layer property to Interactive, NPC for it to work correctly. We will later add a new interaction connected with our NPC.

Integrating InteractionDetector into our code

We will add a reference to the InteractionDetector inside the Agent script and we will call the DetectInteractable() method on the InteractionDetector in the Update() method of the Agent class.

public class Agent : MonoBehaviour
{
    …

    private State m_currentState;

    private InteractionDetector m_interactDetector;

    private void Awake()
    {
        …
        m_interactDetector = GetComponent<InteractionDetector>();
    }

    …

    private void Update()
    {
        if (m_interactDetector != null)
            m_interactDetector.DetectInteractable();
        if (m_currentState != null)
            m_currentState.Update(Time.deltaTime);
    }

    …
}

We ensure the execution order by calling DetectInteractable() on the InteractionDetector before the current state’s Update() method is called. This way, we have the most recent data about the detected interactable objects before we decide whether we can transition to the InteractState behavior.

We could give the InteractionDetector its own Update() method, but when we don’t explicitly define the execution order of our scripts we may end up with strange, hard-to-find bugs that stem from an incorrect order of execution. This can happen whenever we rely on the Start() or Awake() method of one script being called before or after the same method in another script.
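The ordering guarantee can be illustrated with a small Unity-free sketch. All the names below (FakeDetector, FakeState, AgentDriver) are hypothetical; the point is that one driver owns the call order, so the state always reads the current frame’s detection result:

```csharp
using System;

// One driver calls detection first, then the state update, so the state
// never reads stale data — no reliance on engine script ordering.
public class FakeDetector
{
    public bool HasTarget { get; private set; }
    public void Detect(bool targetInRange) => HasTarget = targetInRange;
}

public class FakeState
{
    public bool SawTarget { get; private set; }
    public void Update(FakeDetector detector) => SawTarget = detector.HasTarget;
}

public class AgentDriver
{
    private readonly FakeDetector m_detector = new();
    private readonly FakeState m_state = new();

    public bool SawTarget => m_state.SawTarget;

    // One Update() owns the order: detect, then update the state.
    public void Update(bool targetInRange)
    {
        m_detector.Detect(targetInRange);
        m_state.Update(m_detector);
    }
}
```

If the two Update calls were driven independently by the engine, the state could read last frame’s detection result depending on script execution order.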

The reference to the InteractionDetector is passed as a parameter to the InteractState and included in the MoveInteractTransition script. This way, we play the Interact animation only when we are detecting an interactable object while providing the Interact input. Please see the GitHub repository for the full version of the Agent script.

Here is the updated InteractState script:

public class InteractState : State
{
    private AgentAnimations m_agentAnimations;
    private InteractionDetector m_interactionDetector;
    private float m_interactDelay = 0.3f;
    private bool m_interactionFinishedFlag = false;

    // other fields

    public InteractState(AgentAnimations agentAnimations, InteractionDetector interactionDetector)
    {
        m_agentAnimations = agentAnimations;
        m_interactionDetector = interactionDetector;
        m_delayTemp = m_slowDownDelay;
    }

    public override void Enter()
    {
        …
    }

    public override void Exit()
    {
        …
    }

    protected override void StateUpdate(float deltaTime)
    {
        // code applying a delay to let the animation play a bit

        if (m_interactionFinishedFlag)
            return;

        if (m_interactDelay <= 0)
        {
            if (m_interactionDetector.CurrentInteractable != null)
            {
                m_interactionDetector.CurrentInteractable.Interact(m_interactionDetector.gameObject);
            }
            m_interactionFinishedFlag = true;
        }
        else
        {
            m_interactDelay -= deltaTime;
        }
    }
}

We are getting the reference to the InteractionDetector as a constructor parameter and we use it to call the Interact() method after a small delay and only if we are detecting an object to interact with.

What is a constructor?

In C#, a constructor is a special method automatically called when an instance of a class is created. Its primary purpose is to initialize the object, often by accepting parameters to set initial values for its fields or properties. This approach, known as dependency injection, allows required objects or values to be passed into the class during its creation.

For example, in our InteractState class (shown above), the constructor is defined as a method with the same name as the class:

public InteractState(AgentAnimations agentAnimations, InteractionDetector interactionDetector) { … }

Inside the parentheses, we pass references to AgentAnimations and InteractionDetector objects. These references are essential because the InteractState class needs access to them to execute its logic.

Read more about constructors on Wikipedia.
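The following is a minimal, Unity-free sketch of the same pattern — constructor-based dependency injection. The Logger and ReportService names are hypothetical; the point is that the dependency is supplied at creation time and stored for later use:

```csharp
using System;

public class Logger
{
    public string Last { get; private set; }
    public void Log(string message) => Last = message;
}

public class ReportService
{
    private readonly Logger m_logger;

    // The constructor runs once, when the object is created, and stores
    // the injected dependency. A null check fails fast on misconfiguration.
    public ReportService(Logger logger)
    {
        m_logger = logger ?? throw new ArgumentNullException(nameof(logger));
    }

    public void Run() => m_logger.Log("report done");
}
```

InteractState works the same way: its dependencies are handed in once and kept in private fields, instead of being looked up globally.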

Notice that we are passing m_interactionDetector.gameObject as the GameObject argument to the Interact() method. We do this because it represents the same object as our player Agent. If needed, you could pass a different GameObject as a parameter to the constructor to provide more flexibility.

Testing the system

To test the setup make sure to get close enough to the spherical Switch object to see the white outline around it and press the left mouse button. This should trigger the interaction logic.

We can now press the left mouse button to interact with the Switch object.

If it doesn’t work for you, make sure to check the layer mask settings on your InteractionDetector. You can also open the scene from the “Article 3 → Result” folder to see the final result.

Implement pick up interaction

The next interaction is for picking up objects – in our case, a staff that serves as a weapon. To keep things simple, we will attach it to the player’s back rather than create a pick-up animation. We just need to create a new WeaponPickUpInteraction class:

public class WeaponPickUpInteraction : MonoBehaviour, IInteractable
{
    public void Interact(GameObject interactor)
    {
        // Toggle the weapon on if the interactor carries a WeaponHelper component
        if (interactor.TryGetComponent(out WeaponHelper weaponHelper))
        {
            weaponHelper.ToggleWeapon(true);
        }
        Destroy(gameObject);
    }
}

Here we actually access a component on the interactor GameObject. As mentioned before, this has some downsides – our new IInteractable object needs to assume how components are attached to the interactor GameObject. On the other hand, our IInteractable abstraction depends only on the stable GameObject class implemented by Unity, and there is very little chance that it will change significantly enough to break our code.

We’ll access a WeaponHelper script that just allows us to toggle on and off the staff attached to the back of the player’s GameObject.

We then add this component to the staff object that is already present in the scene so that it can be detected by our InteractionDetector.

The result of adding WeaponPickUpInteraction into our project

We have successfully implemented an Interaction System in our project. Let’s explore how we can expand it with more interactions.

Implement NPC interaction

Let’s expand the project with an interaction wherein the NPC from article 1 stops and waves to the Player.

We already have a WaveState class and an IAgentWaveInput which will allow us to trigger the Wave state. It isn’t anything new if you have read through the second post in this series, so feel free to browse the code on GitHub. The animations are implemented on a separate Animation Layer with an Avatar Mask applied so that they affect only the upper part of the character’s body.


A new layer was added to our character Animator in order to handle the Wave and Interact animations. The Upper Body AvatarMask will make only the upper part of the character body affected by our new animations.

Here is the new IAgentWaveInput interface (following the interface segregation principle covered in Article 2):

public interface IAgentWaveInput
{
    event Action OnWaveInput;
}

It contains a single event, OnWaveInput, that we need to trigger to make a character in our game stop and play the waving animation – in other words, enter the WaveState. Now we are going to create a new NPCInteractable script that will trigger this event:

public class NPCInteractable : MonoBehaviour, IInteractable, IAgentWaveInput
{
    public event Action OnWaveInput;

    public void Interact(GameObject interactor)
    {
        OnWaveInput?.Invoke();
    }
}

As you can see, calling the Interact() method simply invokes the OnWaveInput event.


Our NPC GameObject will now have an NPCInteractable script on it and a trigger Capsule Collider component. It is also assigned to the NPC layer which our InteractionDetector can detect.

We will add it to our NPC, making sure that it also has a Capsule Collider and that it is on the NPC layer so that our InteractionDetector can detect it. When we walk up to our NPC and interact with it, we now get a nice animation:

The result of adding an NPCInteractable script to our NPC GameObject

All we had to do to implement this was to extend our interaction system with a new class and add a new animation and a new state to our state pattern. This is much easier than modifying an Update() method in our AgentMonolithic state from article 2.

Adding IActiveInteractable

One extra issue that we want to address is giving an interaction a delay. We can make our NPC play the wave animation, but to preserve the immersion we don’t want the player to be able to trigger it over and over without pause. The solution is simply to introduce a “cooldown timer”, making the NPC wait three seconds before it can be interacted with again.

We can add a new interface called IActiveInteractable:

public interface IActiveInteractable : IInteractable
{
    bool IsInteractionActive { get; }
}

Here, IActiveInteractable has a single property, IsInteractionActive { get; }, that we can use to check whether the interactable object can be interacted with at this moment. By inheriting from IInteractable, we enforce the requirement that all IActiveInteractable objects must implement the interaction behavior defined in IInteractable. It wouldn’t make sense to have an IActiveInteractable component on an object that isn’t interactable in the first place.

This approach provides several benefits:

  • It ensures that any active interactable object is inherently interactable, maintaining a clear and understandable design.
  • Existing logic that works with IInteractable can seamlessly work with IActiveInteractable, making the code easier to reuse and reducing redundancy.
  • Following the open-closed principle, we can extend the functionality of interactable objects without modifying existing interfaces, making our codebase more maintainable.
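Interface inheritance and the “probe for the extended contract” check can be sketched in plain C# as follows. The CooldownSwitch and Detector names are hypothetical, and a string again stands in for the GameObject interactor:

```csharp
using System;

// Callers that only know IInteractable keep working; callers that care
// can probe for the extended contract with a type pattern.
public interface IInteractable
{
    void Interact(string interactor);
}

public interface IActiveInteractable : IInteractable
{
    bool IsInteractionActive { get; }
}

public class CooldownSwitch : IActiveInteractable
{
    public bool IsInteractionActive { get; private set; } = true;
    public int Activations { get; private set; }

    public void Interact(string interactor)
    {
        if (!IsInteractionActive) return;
        Activations++;
        IsInteractionActive = false;
    }
}

public static class Detector
{
    // Mirrors the InteractionDetector check: an interactable is only
    // selectable if it is not an inactive IActiveInteractable.
    public static bool CanSelect(IInteractable interactable)
        => !(interactable is IActiveInteractable active && !active.IsInteractionActive);
}
```

Note that CanSelect still accepts a plain IInteractable, so objects that never implement the extended interface are always selectable — exactly the backward compatibility the bullet points describe.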

Adding the new interface to NPCInteractable script

To implement the logic we will add a timer to the NPCInteractable script and give it a delay (m_delayBetweenInteractions) before the interaction option becomes active again (IsInteractionActive).

public class NPCInteractable : MonoBehaviour, IActiveInteractable, IAgentWaveInput
{
    [SerializeField]
    private float m_delayBetweenInteractions = 3;
    private float m_currentDelay = 0;

    public bool IsInteractionActive { get; private set; } = true;
    public event Action OnWaveInput;

    public void Interact(GameObject interactor)
    {
        if (IsInteractionActive == false)
        {
            return;
        }

        OnWaveInput?.Invoke();
        m_currentDelay = m_delayBetweenInteractions;
        IsInteractionActive = false;
    }

    private void Update()
    {
        if (IsInteractionActive == false)
        {
            if (m_currentDelay > 0)
            {
                m_currentDelay -= Time.deltaTime;
                return;
            }
            IsInteractionActive = true;
        }
    }
}

Here we have implemented a delay of three seconds before we can repeat NPC Interaction.
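The countdown logic can also be extracted into a small reusable class, sketched here without UnityEngine. The CooldownTimer name is hypothetical, and Tick() plays the role of Update(), receiving elapsed time instead of reading Time.deltaTime:

```csharp
using System;

public class CooldownTimer
{
    private readonly float m_duration;
    private float m_remaining;

    public CooldownTimer(float duration) => m_duration = duration;

    // Ready once the remaining time has counted down to zero
    public bool IsReady => m_remaining <= 0;

    // Restarts the cooldown, e.g. right after an interaction fires
    public void Start() => m_remaining = m_duration;

    // Advance the timer by the elapsed frame time
    public void Tick(float deltaTime)
    {
        if (m_remaining > 0)
            m_remaining -= deltaTime;
    }
}
```

Pulling the timer out like this would let other interactables reuse the same cooldown behavior instead of each re-implementing the delay fields.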

Modifying the InteractionDetector class

To integrate our new IActiveInteractable interface, we need to make some changes to the existing InteractionDetector script. These changes allow the detector to check whether an interactable object is currently active. To keep adhering to the open-closed principle, we can update the InteractionDetector class like this (full script on GitHub):

public class InteractionDetector : MonoBehaviour
{
    // other fields

    public IInteractable CurrentInteractable { get; private set; }

    public void DetectInteractable()
    {
        Collider[] result = Physics.OverlapSphere(...);
        if (result.Length > 0)
        {
            IInteractable interactable = result[0].GetComponent<IInteractable>();
            Highlight highlight = result[0].GetComponent<Highlight>();

            // Skip interactables that are currently inactive
            if (interactable is IActiveInteractable activeInteractable && activeInteractable.IsInteractionActive == false)
            {
                ClearCurrentInteractable();
                return;
            }

            if (interactable != CurrentInteractable)
            {
                …
            }
        }
        else
        {
            ClearCurrentInteractable();
        }
    }

    private void ClearCurrentInteractable()
    {
        …
    }
}

This change ensures that inactive interactable objects are not highlighted or interacted with, enhancing the realism and immersion of the game.

The result is that the outline is gone after we trigger the interaction and that we can’t invoke it again before the delay expires.

The result of implementing the IActiveInteractable interface to our NPCInteractable object

Conclusion

In this article, we have explored the power of interfaces to create a modular and maintainable interaction system in Unity. By defining an abstract IInteractable interface, we were able to separate the interaction logic from the specific implementations. That way we were able to add new types of interactions seamlessly.

The use of interfaces promotes code reusability and scalability, and it also enhances the flexibility of our system. By adhering to object-oriented design principles such as the open-closed principle and the interface segregation principle, we ensure that our codebase remains clean, modular, and easy to extend. You can get the full project on our GitHub.

If we wanted to improve the design further, we could consider using a chain-of-responsibility pattern or the strategy pattern to remove the need to modify the script in the future. If you’d like to learn more about the chain-of-responsibility pattern, check out this article on Wikipedia.

In the next article we will delve deeper into object-oriented design patterns and explore advanced techniques for creating flexible and maintainable game systems using composition.
