I understand that the data perceived by a Ray Perception Sensor 3D component is fed directly into the neural net. I would like to better understand the granularity of that data, even though the developer never accesses it directly. Knowing this might give some insight into whether there is sufficient data for training in a given situation, although of course such insights may not be intuitive.
Is the perception data merely “something was detected”, or is there more granular information, such as which ray did the detecting and which tag was hit? I have searched the documentation and online but have not been able to find an answer. Apologies if it is in plain sight somewhere and I missed it.
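For what it's worth, here is the kind of probe I was imagining in order to inspect the data myself. This is only a sketch based on my skim of the package source, not anything from the docs; the Perceive() and GetRayPerceptionInput() calls and the RayOutput field names are assumptions on my part and may differ between ML-Agents versions:

```csharp
using UnityEngine;
using Unity.MLAgents.Sensors;

// Hypothetical debug helper: attach next to a RayPerceptionSensorComponent3D
// and it logs, per ray, whether anything was hit, the index of the detectable
// tag that was hit, and the normalized hit distance along the ray.
// NOTE: sketch only; API details may vary by ML-Agents package version.
public class RayPerceptionDebugLogger : MonoBehaviour
{
    public RayPerceptionSensorComponent3D sensorComponent;

    void FixedUpdate()
    {
        // Build the raycast description from the component's inspector settings,
        // then run the same perception pass the sensor would run.
        RayPerceptionInput input = sensorComponent.GetRayPerceptionInput();
        RayPerceptionOutput output = RayPerceptionSensor.Perceive(input);

        for (int i = 0; i < output.RayOutputs.Length; i++)
        {
            RayPerceptionOutput.RayOutput ray = output.RayOutputs[i];
            Debug.Log($"Ray {i}: hasHit={ray.HasHit}, " +
                      $"hitTagIndex={ray.HitTagIndex}, " +
                      $"hitFraction={ray.HitFraction}");
        }
    }
}
```

Even if that works, it would only show me the raw raycast results, not necessarily the exact float vector the trainer consumes, which is really what I'm asking about.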
Thanks for any clarification!