Modular AI system concept: how best to implement it?

I have an idea for a modular AI system. The main goal is to allow this system to run anything from simple AI (fish) to advanced AI (villagers and combat squads). The second goal is to allow randomized diversity, so that each instance of an AI is unique to some degree. The third goal is to allow users to edit a base template to extend or implement new AI features. If you want a more detailed concept brief, you can find a viewable PDF on my Google Drive here. I plan on using a custom editor for AI assembly. Here’s the AI Creator workflow:

  • Create Species, which includes group variables like species name and update frequency.
  • Create Sensors, functions incorporating colliders, raycasts, and parameters to understand surroundings.
  • Create Mind, specifying decision-making assessments and variables.
  • Create Body, which holds the corresponding micro tasks (position, animations, etc.).
  • Create Tasks, sequential lists of micro tasks to perform.
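To make the workflow concrete, here is a minimal sketch of how the five pieces might map onto MonoBehaviours. All class names and fields are hypothetical placeholders, not a finished API:

```csharp
using UnityEngine;

// Hypothetical skeletons for the five AI Creator pieces.
public class Species : MonoBehaviour {
    public string speciesName;
    public float updatesPerSecond = 5f; // shared update frequency for all instances
}

public abstract class Sensor : MonoBehaviour {
    // Each sensor type (vision, scent, sound) implements its own detection pass.
    public abstract void Scan(Mind mind);
}

public class Mind : MonoBehaviour {
    // Runs the assessment checklist and picks the desired task.
    public virtual void Think() { }
}

public class Body : MonoBehaviour {
    // Owns micro tasks: position, animation triggers, look-at, etc.
}

public class Task : MonoBehaviour {
    // A sequential list of micro tasks, executed through the Body.
}
```

The point of the split is that the editor can generate any combination of these onto one prefab.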

A new AI species is given a name to distinguish it from other AI species. A species’ update parameter determines how frequently all AI instances of that species will update. Sensors aim to mimic vision, scent, and sound. Each sensor type has a unique detection process. When a sensor detects something, it notifies the Mind. The Mind weighs detections, individual stats, and memory while making choices. The Mind follows a core sequential checklist:

  • Individual Assessment
    • Assess detectable threats; assign a priority to each threat based on variable thresholds.
    • Assess personal well-being; assign a priority to each stat based on variable thresholds.
    • Set the desired task based on threat and/or variable priority comparisons.
  • Actuation
    • Continue the current task, or change to the desired task. Run the task.
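The checklist above could be sketched roughly like this. Every type and method name here is a hypothetical placeholder for illustration only:

```csharp
using UnityEngine;

public class PrimordialMind : MonoBehaviour {
    private int currentTask; // task ids are placeholders

    public void Think() {
        int[] threatPriorities = AssessThreats();   // detections vs. variable thresholds
        int[] statPriorities   = AssessWellBeing(); // hunger, health, etc. vs. thresholds
        int desiredTask = ChooseTask(threatPriorities, statPriorities);

        // Actuation: continue the current task, or swap to the desired one.
        currentTask = desiredTask;
        RunTask(currentTask);
    }

    int[] AssessThreats()   { return new int[0]; } // stubs for illustration
    int[] AssessWellBeing() { return new int[0]; }
    int ChooseTask(int[] threats, int[] stats) { return currentTask; }
    void RunTask(int task)  { }
}
```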

Some AI are more intelligent. By adding more core sequences to the Mind, the AI species gains more capabilities and assessment power. New capabilities come in the form of available tasks and alternative checklists. Basic tasks handle basic life functions such as eating, sleeping, moving, fighting, fleeing, and searching. More intelligent tasks include hunting, fishing, mining, building, and so on. Furthermore, some AI can be part of an active group, such as a clan, squad, village, or culture. Most groups simply provide extra statistical modifiers during assessment, while others take control of the AI’s Individual Assessment and operate on different core sequences:

  • Squad Assessment – if not in a squad, perform the next core sequence.
  • Individual Assessment – if not in serious danger, perform the next core sequence.
  • Group Assessment – if the group has no needs, perform the next core sequence.
  • Task Assessment – compare all priority assessments from the previous core sequences, then set the current task.
  • Actuation – continue or initiate the current task.
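One way to model that chain is an ordered list of core sequences, where each sequence reports whether the checklist should continue. This is a sketch under that assumption; the names and stub bodies are hypothetical:

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

public class SmartMind : MonoBehaviour {
    private readonly List<Func<bool>> coreSequences = new List<Func<bool>>();

    void Awake() {
        // Order matters; each sequence returns true when the checklist should continue.
        coreSequences.Add(SquadAssessment);      // true when not in a squad
        coreSequences.Add(IndividualAssessment); // true when not in serious danger
        coreSequences.Add(GroupAssessment);      // true when the group has no needs
        coreSequences.Add(TaskAssessment);       // compares gathered priorities, sets current task
    }

    public void Think() {
        foreach (var sequence in coreSequences)
            if (!sequence()) break; // a sequence can cut the checklist short
        Actuate();                  // continue or initiate the current task
    }

    // Stubs: a sequence that stops the chain would set the task itself first.
    bool SquadAssessment()      { return true; }
    bool IndividualAssessment() { return true; }
    bool GroupAssessment()      { return true; }
    bool TaskAssessment()       { return true; }
    void Actuate()              { }
}
```

Adding intelligence to a species then just means adding more entries to the list.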

The Body handles all animations and movement and contains most micro tasks. A micro task is a function that handles one very explicit action: an animation, a look-at, a move-to, and so on. The Body is used primarily by the Task script, which is given a specific task to perform. Each task is a set of micro actions activated sequentially and governed by smaller assessments.
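A task made of sequential micro tasks could be sketched as a queue of delegates, where each micro task reports completion so the task can advance. This is a minimal illustration with hypothetical names:

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// A task as a sequence of micro task delegates run against the Body.
public class TaskRunner : MonoBehaviour {
    private readonly Queue<Func<bool>> microTasks = new Queue<Func<bool>>();

    public void SetTask(IEnumerable<Func<bool>> steps) {
        microTasks.Clear();
        foreach (var step in steps) microTasks.Enqueue(step);
    }

    void Update() {
        if (microTasks.Count == 0) return;
        if (microTasks.Peek()())  // run the current micro task
            microTasks.Dequeue(); // advance when it reports completion
    }
}
```

A “move to, then play eat animation” task would then just be two delegates enqueued in order.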

When a new AI is complete, the AI Creator generates a new prefab with all the game objects and scripts needed. The user then takes the species prefab and merges it with the desired creature prefab (model, animator, camera, etc.), the component connections are corrected, and when finished the user saves the merged result as a new prefab. The latest prefab can then be instantiated whenever and wherever. An AI manager tracks all species and their instances and updates them accordingly.
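The manager could tick every instance of a species at that species’ frequency, instead of each instance running its own Update. A sketch, assuming hypothetical Species (with an updatesPerSecond field) and Mind (with a Think() method) components:

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class AIManager : MonoBehaviour {
    private readonly Dictionary<Species, List<Mind>> instances =
        new Dictionary<Species, List<Mind>>();

    public void Register(Species species, Mind mind) {
        List<Mind> list;
        if (!instances.TryGetValue(species, out list)) {
            list = new List<Mind>();
            instances[species] = list;
        }
        list.Add(mind);
        if (list.Count == 1)
            StartCoroutine(TickSpecies(species, list)); // one ticker per species
    }

    private IEnumerator TickSpecies(Species species, List<Mind> minds) {
        var wait = new WaitForSeconds(1f / species.updatesPerSecond);
        while (true) {
            foreach (var mind in minds) mind.Think();
            yield return wait;
        }
    }
}
```

Centralizing the tick also makes it easy to stagger or throttle updates later, which matters once there are hundreds of instances.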

I am not sure what the best way of coding this system would be. I’ve looked into enums, switches, coroutines, overloading, delegates, generics, namespaces, and polymorphism.

What combination of structures would work best? I’m focused on optimization rather than simplicity.

I would run with Unity’s component-based system. Components work well for this type of layered approach to AI. (I also think you will need more layers, but that’s another discussion.)

Have a base component for each type of layer. Use [RequireComponent] to define the dependencies between components. You can then inherit from each component as needed, or set up custom inspectors for each component and configure everything in the inspector.
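For example, roughly (the base class names here are made up):

```csharp
using UnityEngine;

public class SensorBase : MonoBehaviour { }
public class BodyBase : MonoBehaviour { }

// Adding a MindBase automatically adds the components it depends on.
[RequireComponent(typeof(SensorBase))]
[RequireComponent(typeof(BodyBase))]
public class MindBase : MonoBehaviour {
    protected SensorBase sensor;
    protected BodyBase body;

    protected virtual void Awake() {
        sensor = GetComponent<SensorBase>();
        body = GetComponent<BodyBase>();
    }
}

// Species-specific behaviour inherits from the base:
public class VillagerMind : MindBase {
    protected override void Awake() {
        base.Awake();
        // villager-specific setup here
    }
}
```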

All of these things would be needed. But if you are asking about them, you may have bitten off more than you can chew. It’s kind of like painting the Sistine Chapel with red, blue, and green paint: it’s a start, but it’s a long, long way from giving you what you need for a masterpiece. AI is not a topic for the faint-hearted.

My code experience originates with web development, Processing, and a bit of Arduino. I have played with Unity for years and have written systems for simple enemies, day cycle, weather, and inventory. Some of the deeper C# capabilities are still a bit foreign to me. I have a solid grasp on enums, switches, coroutines, and overloading. The rest I have never attempted, though the basic concepts are understandable. Something I failed to mention earlier: If this AI system works, I want it to be available free for anyone. I’ve learned a lot by tinkering with free code. I feel it is time to start giving back. My previous systems are in the process of being polished before becoming public.

I’ve been thinking about this idea. Would it be possible to filter through sibling components? Perhaps by some sort of component tag, so that the base component can log which components are relevant at the start? Or would it be better to go the other way around, where the components subscribe or inject themselves into the base component? Could a filter method be based on component names, such as a prefix or suffix? For determining sequence, could each component have a specific variable that the Base uses to arrange a list of operations? I’m not sure whether this is as flexible or optimized as I’m aiming for.

Here’s what I’m thinking now:

  • Main AI Component, which requires the Base Components.
  • Base Components, which search for and log explicit components fitting a name-prefix parameter.
  • Explicit components, which hold the specialized logic and return a Boolean for top-level evaluations.

Updates are triggered by the Main AI, which calls on the Bases, which in turn call on explicit components in order. Explicit components are sensors, assessment methods, actuation, etc. Thoughts?
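One way to sketch that structure, using a shared base type with an order value instead of name-prefix matching (all names hypothetical):

```csharp
using System.Linq;
using UnityEngine;

// Explicit components share a base type with an order value and a
// Boolean top-level evaluation.
public abstract class ExplicitComponent : MonoBehaviour {
    public int order;                // used by the Base to sequence calls
    public abstract bool Evaluate(); // top-level evaluation
}

public class BaseComponent : MonoBehaviour {
    private ExplicitComponent[] parts;

    void Start() {
        // Log sibling/child components once, sorted by their order value.
        parts = GetComponentsInChildren<ExplicitComponent>()
                .OrderBy(p => p.order)
                .ToArray();
    }

    public void Tick() {                 // called by the Main AI component
        foreach (var part in parts)
            if (!part.Evaluate()) break; // a false result can short-circuit
    }
}
```

The one-time lookup in Start keeps per-frame cost low, and the order field gives you the sequencing variable you described.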

I probably have condensed the base components too much. What do you suggest for base components?

Thank you for the feedback!


Bear in mind I am no AI designer, not by a long shot. The most complex AIs I’ve personally built are for Space Invaders, animal simulations, and zombie shooters. But the layer structure I typically use is something like this:

  • High level goal planning
  • Low level goal planning
  • Steering
  • Motor
  • (For a more complex simulation I would work with sensors and memory components as well. I haven’t needed this yet, so I won’t go into it).

High level goal planning determines very high level objectives and chains them together. I am wounded. Therefore I shall find cover. Then I will retreat to cover. Then I will use a health pack. Then I will return to combat.

Low level goal planning takes each of these tasks and breaks it down. Tasks like “Retreat to cover” become actual paths to follow.

Steering deals primarily with very short term navigation. How do I get from my current position on the path to the next position without falling off the edge of a cliff or colliding with another character? The output from the steering layer is normally a direction heading.

The motor level takes the direction heading and converts that into the actual force to move the character.
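The steering-to-motor handoff might look something like this; class names and the force model are illustrative assumptions, not a real implementation:

```csharp
using UnityEngine;

public class SteeringLayer : MonoBehaviour {
    public Vector3 ComputeHeading(Vector3 nextPathPoint) {
        // Very short term navigation: aim at the next point on the path.
        return (nextPathPoint - transform.position).normalized;
    }
}

public class MotorLayer : MonoBehaviour {
    public float maxForce = 10f;

    public void Apply(Rigidbody body, Vector3 heading) {
        // Convert the desired heading into an actual force on the character.
        body.AddForce(heading * maxForce, ForceMode.Acceleration);
    }
}
```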

You can do this pretty easily if they share a common base type or a common interface. GetComponent or GetComponents works for this.

For more sophisticated approaches you can use reflection.
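As a small illustration of the name-prefix idea via reflection (this helper is hypothetical, not a Unity API):

```csharp
using System;
using System.Linq;
using UnityEngine;

public static class ComponentFilter {
    // Find every attached component whose type name carries a given prefix,
    // without requiring a shared base class or interface.
    public static Component[] WithPrefix(GameObject go, string prefix) {
        return go.GetComponents<Component>()
                 .Where(c => c != null &&
                             c.GetType().Name.StartsWith(prefix, StringComparison.Ordinal))
                 .ToArray();
    }
}

// Usage: var sensors = ComponentFilter.WithPrefix(gameObject, "Sensor");
```

A shared base type is cheaper and safer; reflection is worth it mainly when you can’t control the component types.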

Awesome! I’d love to see how you go about this. Regarding “some of the deeper C# capabilities”, here is a link described as one of the first sources with a complete and clear comparison of data structures: Array, ArrayList, List, IList, ICollection, Stack, Queue, HashTable, Dictionary, IQueryable, IEnumerable. Hope it helps. It may not, but I wish you good luck!

This is sort of how I have my AI system designed:

As you can see in the project folder for it I have various sensors, and I have a structure to define a behaviour tree (similar to RAIN’s behaviour tree, but with states added… which really are just a way to swap out the current tree quickly).

Here you can see it on one of my AI units:

Thank you for the examples. After reviewing them I ended up using a similar method, with some inspiration from Unity’s Stealth videos, to achieve sensory input. I’m currently running prototype trials with a primitive set of AI scripts. There’s a bug with Unity’s OnTriggerStay function: in my recent experience, the function is simply never called. I’ve worked around it by storing game objects in a list via OnTriggerEnter and OnTriggerExit, which seem unaffected by the bug.

The attached images show the primitive AI’s hierarchy and exposed variables. Here are the scripts as of now.
Note: I had the motor connected directly to the sensor, but right now nothing drives the motor while I work on the mind and, eventually, the task scripts. I’m clearing and regathering a list 5 times a second. It works fine for one AI entity, but will this still be a good idea when there are 50? 100?

using UnityEngine;
using System.Collections;
using System.Collections.Generic;


public class PrimitiveStats : MonoBehaviour {

    private float maxHealth = 50.0f; // max state
    public float health = 50.0f; // current state

    public float maxHunger = 50.0f; // max state
    public float hunger = 50.0f; // current state

    public float maxStamina = 50.0f; // max state, will not exceed hunger later on
    private float stamina = 50.0f; // current state

    public float endurance = 1.0f; // stamina consumption rate during exertion
    public float strength = 1.0f; // base of attack damage
    public float speed = 3.0f; // base for motor movement speed
    public float dexerity = 5.0f; // base for motor turn speed
    public float reach = 1.5f; // base for motor stopping distance

    public float persistenceInSeconds = 10.0f; // when a target is lost, this is the timer for searching, running, etc.

    public List<string> preyTags = new List<string>();     // things that it hunts
    public List<string> predatorTags = new List<string>(); // things that hunt it

    // TODO: function to change health (reduction, gain)
    // TODO: function to change hunger (reduction, gain)
    // TODO: function to change stamina (reduction, regeneration, consumption, etc.)
    // TODO: function to handle death, alerting the mind for an appropriate response
}
using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class PrimitiveMind : MonoBehaviour {

    private PrimitiveSensor Sensor; // cache
    private PrimitiveMotor Motor; // cache
    private PrimitiveStats Stats; // cache

    private List<GameObject> visibles = new List<GameObject>(); // visible targets from Sensor

    void Start () {
        Sensor = GetComponentInChildren<PrimitiveSensor>();
        Motor = GetComponent<PrimitiveMotor>();
        Stats = GetComponent<PrimitiveStats>();
    }

    public void SensorInput(List<GameObject> vTargets)
    {
        visibles = vTargets;
    }

    // FUNCTION: Assign a threat level to each target (what is it? do I know it? Is it aggravated?)
    // FUNCTION: Assign a threat level to each stat (how close to starving are we? Am I near death?)
    // FUNCTION: Assess threat levels (I need that food over there, but that other AI will eat me. Should I risk a nibble?)
    // FUNCTION: Assign Task (I'm about to starve, I'll give it a shot)
}
using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class PrimitiveSensor : MonoBehaviour {

    public float updatesPerSecond = 5.0f;

    private GameObject body; // parent with collider, rigidbody, other scripts
    private SphereCollider sensorCollider;

    public List<string> tags = new List<string>(); // tags to be wary of; will later come from the stats' predator tag list
    private List<GameObject> targets = new List<GameObject>(); // every game object that passed the tags test
    private List<GameObject> visibles = new List<GameObject>(); // what's in sight
    private List<GameObject> audibles = new List<GameObject>(); // coming later; unsure how to handle sound since it can pass through and be blocked by mass
    public bool targetInSight = false; // indicates when at least one target is visible

    private PrimitiveMind mind; // cache

    [Range(0.0f, 360.0f)]
    public float fieldOfView = 110.0f;

    void Awake()
    {
        body = transform.parent.gameObject;
        sensorCollider = GetComponent<SphereCollider>();
        mind = body.GetComponent<PrimitiveMind>();
    }

    void Start()
    {
        StartCoroutine(SensorUpdate());
    }

    // Workaround for the OnTriggerStay bug: track entries/exits in a list instead.
    void OnTriggerEnter(Collider col)
    {
        if (tags.Contains(col.tag) && !targets.Contains(col.gameObject))
        {
            targets.Add(col.gameObject);
            Debug.Log(col.tag + " entered");
        }
    }

    void OnTriggerExit(Collider col)
    {
        if (targets.Remove(col.gameObject))
            Debug.Log(col.tag + " left");
    }

    IEnumerator SensorUpdate()
    {
        while (true)
        {
            visibles.Clear(); // rebuild the visible set at the START of each cycle

            foreach (var target in targets)
            {
                // Vision check
                Vector3 direction = target.transform.position - transform.position;
                float angle = Vector3.Angle(direction, transform.forward);

                if (angle < fieldOfView / 2)
                {
                    RaycastHit hit;
                    if (Physics.Raycast(transform.position, direction.normalized, out hit, sensorCollider.radius)
                        && hit.transform.CompareTag(target.tag))
                    {
                        Debug.DrawLine(target.transform.position, transform.position);
                        visibles.Add(target);
                    }
                }

                // Hearing check goes here
                // Scent check goes here, if added later
            }

            targetInSight = visibles.Count > 0;
            mind.SensorInput(new List<GameObject>(visibles)); // pass a copy so clearing next cycle doesn't wipe the mind's list

            yield return new WaitForSeconds(1.0f / updatesPerSecond);
        }
    }
}
using UnityEngine;
using System.Collections;

public class PrimitiveMotor : MonoBehaviour {

    private PrimitiveStats stats;
    private float moveSpeed;
    private float turnSpeed;
    private float stopRange;

    void Start () {
        stats = GetComponent<PrimitiveStats>();
        moveSpeed = stats.speed;
        turnSpeed = stats.dexerity;
        stopRange = stats.reach;
    }

    /* parts are from a different Unity forum thread, I don't remember which */
    void Move(Transform target, float dist) {

        // Rotation
        Quaternion neededRotation = Quaternion.LookRotation(target.position - transform.position);

        // use spherical interpolation over time
        transform.rotation = Quaternion.Slerp(transform.rotation, neededRotation, Time.deltaTime * turnSpeed);

        // stop at a distance so it doesn't constantly run into the target
        if (dist >= stopRange) {
            transform.position += transform.forward * moveSpeed * Time.deltaTime;
        }
    }
}

EDIT: I just found this. I’ll take a crack at testing this concept in a few days.
http://obviam.net/index.php/game-ai-an-introduction-to-behavior-trees/

I don’t know whether this will help, but it’s a good tutorial I stumbled upon for AI battle behavior.

My free time is extremely limited. Please forgive me for long stretches of inactivity.

I’ve found this behavior tree system on GitHub. I understand the concepts at a macro level, but not so much at a micro level. What exactly is the code doing? How does one implement this style of BT to mimic living behavior? How flexible is it? How costly is this method of AI implementation?

On another note, I think I have a decision-making concept. Each AI has variables that represent threats, for example: hungerLevel, healthLevel, threatLevel, etc. These values would be determined by passing a normalized value (current health / max health) into an equation. The equations (example: output = 3(input)^2 + 1.2) act as risk assessments. All risk assessment outputs (ranged 0–1) are compared, and the highest becomes the problem to address first.

assessments → risk evaluation → comparison → perform actions respective to the comparison results

examples of assessments:
starvation assessment = current energy / max energy
predator assessment = for each threat in threats, current threat type score * threat aggression score * (1/distance)

A threat type would be a simple arbitrary value that represents how dangerous the AI in question is to this AI. This hopefully allows for AI creatures to be more afraid of one type of creature than another.

Risks would be based on inputting each assessment into a respective curve field.

I believe a simple switch based on an enum value output from the comparison of risks could drive the actions.
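The pipeline could be sketched with AnimationCurve fields for the risk curves (so they're editable in the inspector) and an enum-driven switch. The enum values, curve shapes, and method names here are hypothetical:

```csharp
using UnityEngine;

public enum Concern { Starvation, Predator }

public class RiskAssessor : MonoBehaviour {
    public AnimationCurve starvationCurve; // risk curve, tuned in the inspector
    public AnimationCurve predatorCurve;

    // Inputs are the normalized assessments described above (0..1).
    public Concern Assess(float energyFraction, float predatorScore) {
        float starvation = starvationCurve.Evaluate(1f - energyFraction);
        float predator   = predatorCurve.Evaluate(predatorScore);

        // Comparison: the highest risk becomes the problem to address first.
        Concern worst = starvation >= predator ? Concern.Starvation : Concern.Predator;

        // A simple switch on the winning enum value drives the action.
        switch (worst) {
            case Concern.Starvation: /* seek food */ break;
            case Concern.Predator:   /* flee or fight */ break;
        }
        return worst;
    }
}
```

Curves give you per-species personality (e.g. cowardly creatures ramp their predator risk up steeply) without changing any code.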

I believe you’ve just described fuzzy logic. :wink:

Before discovering this thread I recently released an asset of the same name on the asset store: Unity Asset Store - The Best Assets for Game Making

At the time of writing it is nearing the end of beta and is 20% off. It is designed to allow the user to create AI of any complexity in the form of modules you can create as integrations or add-ons and sell on the asset store or use in your own projects.

The next update will include a plethora of integrations, as well as an improved entity system and tutorials.

Edit: As a more technical note, it uses utility theory to score and execute actions that are defined by modules, in the form of actions and conditions created with a two-step API.