Some noob questions

I'm experimenting with giving evolving creatures AI using ML-Agents. I have a basic system that can generate modular creatures out of preset body parts in 2D, and it can even loosely simulate evolution with pseudo machine learning, simply by having creatures with poor body configurations live shorter lives and reproduce less.

But I'm having trouble combining that with ML-Agents and am unsure of the possible scope. I've peeled it back to a single static creature that needs to collect randomly spawned food so I can experiment with ML-Agents.

What information exactly do observations take from different types? I know they take floats from base types like ints, vectors, transforms and the like. But what about a GameObject? Is it just grabbing the transform's numbers from it, or is it taking labels and component data too, etc.?

Do you know of any resources on handling dynamic behaviours? For example, where the agent may have different actions available to it at different times (e.g. switching weapons, or going blind), or even a variable number of allies/enemies/food objects. Or, in my case, instances where it may not have eyes, or its mouth might be in a different spot on its body, extra fins, etc.

Is the grid sensor a good way to detect and differentiate hazards, buffs and enemies, similar to the food collector example? I'm studying the GridSensorComponent script and don't really understand how it works. What information is it collecting?
GitHub - mbaske/grid-sensor: Grid Sensor Components for Unity ML-Agents. I just found this while writing, so it might answer this question.

How does the agent contextualize observations? If it's just fed 3 out-of-context floats, how does it eventually comprehend that those are its coordinates? Further, if you feed it a list of transforms for food pellets to collect, does it just eventually work out what each one is through iteration and ML, or do I need to be assigning context to these besides rewards?

GitHub - mbaske/grid-sensor: Grid Sensor Components for Unity ML-Agents seems to be woefully out of date. The components it references no longer seem to exist, and the changelog at the top even states that 2D is no longer supported.

It is a difficult problem you want to solve. Try to step back and make the simplest possible projects first, like collecting green food while avoiding red ones; that is already complicated enough. ML-Agents can be extremely frustrating to get even the simplest things working, but it can also make things work that you otherwise couldn't imagine solving on your own (like the Walker example).
Just ask ChatGPT for the other questions; it will give you a better answer than I can.

Yes, of course I started small, but I can't go further without knowing these things. It's not really something you can learn through trial and error (somewhat ironically).

The things you ask are very basic and can be deduced pretty simply; that is why I said to start with easier things first. E.g. just by peeking into the VectorSensor class implementation you can see that there is no overload of AddObservation that takes a GameObject.
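If you want the network to see something about a GameObject, you pass the pieces you care about through the overloads that do exist (float, int, bool, Vector2/Vector3, Quaternion, lists of floats). A minimal sketch, assuming a hypothetical `food` Transform field on the agent and a 2D setup:

    // Inside your Agent subclass; `food` is a hypothetical Transform field.
    public Transform food;

    public override void CollectObservations(VectorSensor sensor)
    {
        // Relative position of the food: 2 floats.
        sensor.AddObservation((Vector2)(food.position - transform.position));
        // The agent's own facing, normalized: 1 float.
        sensor.AddObservation(transform.rotation.eulerAngles.z / 360f);
        // Whether the food is currently present: 1 float (true -> 1, false -> 0).
        sensor.AddObservation(food.gameObject.activeSelf);
    }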
now the non-toxic part of my answer:

You could use AddOneHotObservation(int observation, int range) to add a type (e.g. with an enum).
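For example, a rough sketch using a hypothetical EntityType enum and a hypothetical `nearestEntity` field; the `range` argument has to match the number of enum values:

    // Hypothetical enum of things the creature can perceive.
    public enum EntityType { Food = 0, Hazard = 1, Buff = 2, Enemy = 3 }

    public EntityType nearestEntityType;
    public Transform nearestEntity;

    public override void CollectObservations(VectorSensor sensor)
    {
        // Encodes e.g. Hazard as the 4 floats [0, 1, 0, 0].
        sensor.AddOneHotObservation((int)nearestEntityType, 4);
        // The type only says *what* it is; its position still gets added separately.
        sensor.AddObservation((Vector2)(nearestEntity.position - transform.position));
    }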

Dynamic behaviours: there is nothing prebuilt. You need to know the maximum number of, e.g., enemies you will have in advance, and then use a bool per slot to say whether that enemy is active or not. It also helps to sort them, e.g. by distance, if you want to, say, avoid hitting them.
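A rough sketch of that padding idea, assuming hypothetical `enemies` and `maxEnemies` fields (needs `using System.Linq;` for the sorting):

    public List<Transform> enemies;   // whatever is currently alive
    const int maxEnemies = 5;         // fixed, so the observation size never changes

    public override void CollectObservations(VectorSensor sensor)
    {
        // Sort by distance so slot 0 is always the closest enemy.
        var sorted = enemies
            .OrderBy(e => Vector2.Distance(transform.position, e.position))
            .ToList();

        for (int i = 0; i < maxEnemies; i++)
        {
            bool active = i < sorted.Count;
            sensor.AddObservation(active);   // 1 float: is this slot in use?
            sensor.AddObservation(active     // 2 floats: relative position, zero-padded
                ? (Vector2)(sorted[i].position - transform.position)
                : Vector2.zero);
        }
    }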

The built-in grid sensor is great, but it makes training much harder and is bad for performance.

How does the agent contextualize observations: it just learns a mapping from the input space (the observations) to the action space that maximally optimizes the reward function. IDK how it works in detail.
What's important for you is that it does this automatically; you just give it the information it could need.

Are you sure? I have in fact added game objects as observations without any obvious issues; the agent seemed to at least grab the transform info from them. But I can't find any docs on what info it takes from each type.
If you observe a GameObject, does it also observe its tag? Rigidbodies? Colliders? What about children, or is it only the transform? Etc. In my own testing I could only reliably conclude that it gets the transform data. I'm guessing it at least gets the tag and collider too, given how the raycast observer works by colliding raycasts after all.

I couldn't get my head around the one-hot observations. Do they just return an observation once, rather than continuously?

Yeah, I didn't expect it to have built-in systems for an evolutionary simulation or anything, obviously. But the dynamic behaviours thing is a significant weakness, even in less complex scenarios, like a human that might sometimes have a bow instead of a sword, or be facing a wolf instead of another human. I guess I just need to give the tech another decade to catch up with my whims lol.

“I have in fact added game objects as observations without any obvious issues.” I would like to see that line of code. I suspect you are using the transform of the object as the observation; I am not aware of any NN that can take a generic object as input and magically collect all the information.

Someone else had a similar desire for the idea of Agent Types & Actions, and if I were going to try something like that, this is how I would try it.

Also, I'm not sure what you mean about facing a wolf vs. a human. You would train the agent that hitting a human is bad and hitting a wolf is good, and let it sort it out.
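For what it's worth, that kind of reward shaping is just a couple of AddReward calls; a minimal sketch, assuming hypothetical "Wolf"/"Human" tags and that a collision counts as a hit:

    // Inside your Agent subclass (2D physics assumed).
    void OnCollisionEnter2D(Collision2D collision)
    {
        if (collision.gameObject.CompareTag("Wolf"))
            AddReward(1f);    // hitting a wolf is good
        else if (collision.gameObject.CompareTag("Human"))
            AddReward(-1f);   // hitting a human is bad
    }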

I suggest doing several of the tutorials to get a better idea of how the agents work and learn imho.

    public override void CollectObservations(VectorSensor sensor)
    {
        sensor.AddObservation(gameObject);
        sensor.AddObservation(creature.hunger);      // floats
        sensor.AddObservation(creature.hungerRate);
        sensor.AddObservation(creature.maxHunger);
        sensor.AddObservation(creature.lifeTime);
        foreach (Mouth mouth in creature.mouths)     // custom object type
        {
            sensor.AddObservation(mouth.gameObject);
        }
    }

I do agree it's unlikely it's collecting all the info I suggested, but the issue is that I need to know exactly what info it does collect.

As for facing a wolf vs. a human, etc.: I mean that teaching it to combat a wolf would be somewhat different behaviour to facing a human. A wolf won't have a shield or ranged weapons, for example. What about a dragon? It's going to be massive and potentially have AoEs the other two examples don't, etc.
Logically, traditional AI would group this all under combat generically, but in ML-Agents it's probably better to use entirely different brains and switch between them manually when detecting that type of enemy (see the sketch below), which can be problematic with multiple different types of enemies in the same encounter.
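Something like this is what I picture for the manual switching (a rough sketch; SetModel is the ML-Agents call for swapping the runtime model, it needs `using Unity.Barracuda;` for NNModel on the version I'm on, and the detection callback and fields are hypothetical):

    // Separately trained models, assigned in the inspector.
    public NNModel wolfBrain;
    public NNModel humanBrain;

    public enum EnemyKind { Wolf, Human }   // hypothetical

    void OnEnemyDetected(EnemyKind kind)    // hypothetical detection callback
    {
        // The behaviour name must match the one the model was trained under.
        if (kind == EnemyKind.Wolf)
            SetModel("WolfCombat", wolfBrain);
        else
            SetModel("HumanCombat", humanBrain);
    }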
That's just enemy examples. What about handling the player switching between different weapons? A bow is going to require very different behaviour to a spear. What if the player is injured and can only use one hand, etc.?
This sort of dynamic complexity is what I mean, and it's just basic stuff you might encounter in any RPG or action game. My project would add multitudes of complexity on top of that with modular body parts, environs, stats, etc.

I've done a lot of the tutorials, including the hummingbird one. They don't answer any of these questions any more than the docs do.