Getting the Detectable Tags from ray perception sensor script

I have an enemy tag in the RPS script, and I want to write an if statement inside my agent script: if the ray cast hits an object with the tag "enemy", then do something. However, I'm not sure how to actually retrieve the detectable tag info from the RPS script. Any help would be great, as I'm fairly new to this.


Sorry for the late response on this. The output for the RayPerception sensor isn’t designed to be usable outside ML-Agents code. It writes the results directly as an array of floats that get sent to training or inference.

It’s still possible to get something out of the results though. This part of the code explains some of the format:

So as an example, if you had 3 rays and 2 detectable tags, there would be 12 output floats (num rays * (num tags + 2)). The output would look like:

output[0] = 1.0 if ray 0 hit tag 0, else 0.0
output[1] = 1.0 if ray 0 hit tag 1, else 0.0
output[2] = 1.0 if ray 0 didn’t hit anything, else 0.0
output[3] = hit fraction of ray 0 (or 1.0 if it missed everything)

output[4] = 1.0 if ray 1 hit tag 0, else 0.0
output[5] = 1.0 if ray 1 hit tag 1, else 0.0
output[6] = 1.0 if ray 1 didn’t hit anything, else 0.0
output[7] = hit fraction of ray 1 (or 1.0 if it missed everything)

output[8] = 1.0 if ray 2 hit tag 0, else 0.0
output[9] = 1.0 if ray 2 hit tag 1, else 0.0
output[10] = 1.0 if ray 2 didn’t hit anything, else 0.0
output[11] = hit fraction of ray 2 (or 1.0 if it missed everything)
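To make the layout above concrete, here is a small decoder sketch in Python (the function and field names are illustrative, not part of the ML-Agents API): it splits the flat observation into per-ray results, assuming the (num tags + 2) stride described above.

```python
def decode_ray_output(output, num_tags):
    """Decode a flat RayPerception observation into per-ray results.

    Each ray contributes (num_tags + 2) floats: one-hot tag flags,
    a 'hit nothing' flag, and the hit fraction.
    """
    stride = num_tags + 2
    assert len(output) % stride == 0, "output length must be a multiple of the stride"
    rays = []
    for i in range(0, len(output), stride):
        chunk = output[i:i + stride]
        tag_flags = chunk[:num_tags]
        hit_tag = tag_flags.index(1.0) if 1.0 in tag_flags else None
        rays.append({
            "hit_tag": hit_tag,                  # index of the tag hit, or None
            "missed": chunk[num_tags] == 1.0,    # the 'didn't hit anything' flag
            "hit_fraction": chunk[num_tags + 1], # 1.0 if it missed everything
        })
    return rays

# 3 rays, 2 tags: ray 0 hit tag 1 at fraction 0.75,
# ray 1 missed everything, ray 2 hit tag 0 at fraction 0.2
obs = [0.0, 1.0, 0.0, 0.75,
       0.0, 0.0, 1.0, 1.0,
       1.0, 0.0, 0.0, 0.2]
print(decode_ray_output(obs, num_tags=2))
```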

The “hit fraction” of the ray is how far along the ray, from start to end, something was hit. So if your ray length was 10 units and you hit something 7.5 units away, the hit fraction would be 7.5 / 10 = 0.75.

Hope that helps…


@celion_unity just wanted to check: if the RayPerception sensor casts 2 rays and the environment scene has a total of 5 tags, then the output array length would be 2 * (5 + 2) = 14.
Will the input to the neural network from this RayPerception be 14 floats?

That seems redundant, as those 14 values could be compressed into a single 0-1 range per ray, with each tag getting its own share of the range.
For example, with 2 tags, instead of the output array
output[0] = 1.0 if ray 0 hit tag 0, else 0.0
output[1] = 1.0 if ray 0 hit tag 1, else 0.0
output[2] = 1.0 if ray 0 didn’t hit anything, else 0.0
output[3] = hit fraction of ray 0 (or 1.0 if it missed everything)
the single value fed to the neural network would be:
in the range (0, 0.33) if ray 0 hit tag 0, where the position within the range tells us the hit fraction;
in the range (0.33, 0.66) if ray 0 hit tag 1;
in the range (0.66, 1) if ray 0 didn’t hit anything.
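The proposed packing could be sketched like this (a hypothetical encoding, not anything ML-Agents does): each tag owns an equal sub-range of [0, 1], and the hit fraction selects the position inside that sub-range.

```python
def encode_ray(hit_tag, hit_fraction, num_tags=2):
    """Pack one ray's result into a single float in [0, 1].

    Each tag owns an equal sub-range; hit_tag=None means 'hit nothing'
    and maps to the last sub-range. Illustrative only.
    """
    num_slots = num_tags + 1                    # tags plus the 'no hit' slot
    width = 1.0 / num_slots
    slot = num_tags if hit_tag is None else hit_tag
    return slot * width + hit_fraction * width

print(encode_ray(0, 0.5))     # hit tag 0 -> lands in [0, 0.33)
print(encode_ray(1, 0.5))     # hit tag 1 -> lands in [0.33, 0.66)
print(encode_ray(None, 1.0))  # missed    -> lands in [0.66, 1.0]
```

Note the ambiguity at sub-range boundaries (a hit on tag 0 at fraction 1.0 encodes to the same value as a hit on tag 1 at fraction 0.0), which hints at why this packing is harder for a network to use.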

Will this approach work?

Also, if I add new tags or remove some tags later in my game's development, will the trained model still work?

Thank you for the suggestion.

It is actually much easier for a neural network to learn from the signals as they've been designed, since they essentially act as 'on/off' switches for activations in the network. Your suggestion may work, but I'm not sure it would be efficient.

If you add or remove tags, you change the input to the network, which would make the trained model obsolete/not applicable.


Hey @andrewcoh_unity, thanks for your reply.

If adding or removing a tag makes the trained model obsolete, wouldn't it be nice to have a feature where the ray perception considers only certain tags, per the user's configuration, and all remaining tags are treated as a single tag (say, a default tag)? That way the model wouldn't break even when new tags are added to the scene, since they would fall under the default tag and wouldn't change the input size of the network.

There is a default for the raycasts. Each ray triggers a flag for each specified tag, plus one default flag for a collision with an object that isn't part of the specified list of tags.


Yes, that is true, but my point is that it would be nice if adding a new tag didn't make the currently trained model obsolete.

Let's say my game scene contains 5 tags in total, and my neural network model only cares about 2 of them; for the remaining 3 tags, it doesn't matter which is which. So ray perception only needs to consider 3 tags: the 2 tags that matter to the neural network, plus one last tag representing the other 3 tags in the game that aren't important to it.
With this approach, if we added a new tag later in the game's development, it would fall into that last category in ray perception, and so wouldn't change the total input size of the neural network.
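The grouping described above could be sketched as a simple tag remap applied before the sensor sees the tag (tag names here are made up for illustration; this is not an existing ML-Agents feature):

```python
# Hypothetical tag grouping: only tags the policy cares about keep
# their identity; everything else collapses into a single "Other"
# bucket, so adding new game tags later doesn't change the
# observation size.
IMPORTANT_TAGS = {"Enemy", "PowerUp"}  # illustrative tag names

def observed_tag(game_tag):
    """Map a raw game tag to the tag the sensor would report."""
    return game_tag if game_tag in IMPORTANT_TAGS else "Other"

print(observed_tag("Enemy"))   # kept as-is
print(observed_tag("Tree"))    # collapsed into "Other"
print(observed_tag("NewTag"))  # tag added later: still "Other", same input size
```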

I am just asking whether this approach could work or whether it would fail. I just want to learn more about ray perception. Sorry if my question seems silly.
