Need help understanding Ray Perception Sensors

Hey, I've recently been working with ML-Agents and I'd like to better understand how the Ray Perception Sensor works. Specifically, I'm training an agent to go after targets using these rays. When the agent touches a target, it gets a blue outline and is supposed to become less important afterwards. The important part is this: if the agent collides with a target it gets rewarded, but if it collides with a target that already has the blue outline (each target has a boolean that represents whether it has the outline or not), it gets a negative reward.
Can the ray sensors make this distinction (and read this boolean from the targets), or do they only distinguish objects based on their tags?

Hello @joaogomes1298 ,

RayPerceptionSensors are similar to Physics.Raycast: rays are cast in the desired directions, and when they hit an object's collider the sensor recognizes that object. What happens after recognition depends entirely on your code. Imagine them as your hands in the kids' game where you're blindfolded and have to catch your friends running around: your hands sweep about trying to touch someone. You could find your friends without using your hands, but spreading them out makes it much easier. That's what Ray Perception Sensors are. The catch for your case: out of the box, the sensor's observation only encodes which of its Detectable Tags a ray hit and how far away the hit was — it does not read arbitrary fields like your boolean. So the distinction is made by tag. The usual trick is to mirror the boolean into the tag: switch the target's tag when it gets outlined, and add both tags to the sensor's Detectable Tags list.
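A minimal sketch of that tag-swap approach might look like the following. All names here are assumptions for illustration — the `Target` component, the `"Target"`/`"OutlinedTarget"` tags (which would need to exist in your Tag Manager and in the sensor's Detectable Tags list), and the reward values are hypothetical, not from your project:

```csharp
using UnityEngine;

// Hypothetical target script: mirrors the "outlined" boolean into the
// GameObject's tag, since the RayPerceptionSensor distinguishes objects
// only by their detectable tags.
public class Target : MonoBehaviour
{
    public bool isOutlined;   // the boolean described in the question

    public void SetOutlined(bool outlined)
    {
        isOutlined = outlined;
        // "Target" and "OutlinedTarget" must both be defined in the Tag
        // Manager and listed under the sensor's Detectable Tags.
        gameObject.tag = outlined ? "OutlinedTarget" : "Target";
    }
}
```

On the agent side, the collision reward could then check the same boolean directly (sketch, with made-up reward values):

```csharp
// Inside your Agent subclass:
void OnCollisionEnter(Collision collision)
{
    var target = collision.gameObject.GetComponent<Target>();
    if (target == null) return;

    if (target.isOutlined)
    {
        AddReward(-0.5f);        // already-outlined target: penalty
    }
    else
    {
        AddReward(1.0f);         // fresh target: reward
        target.SetOutlined(true); // outline it and swap its tag
    }
}
```

With both tags detectable, the agent's observations now differ for outlined vs. non-outlined targets, so the policy can actually learn to avoid the outlined ones.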
Hope this helps. Sorry for being childish :P.