I want to give visual feedback about the GameObject whose position is closest to the center of the screen. Really, not the 3D shape / mesh / collider, only the GameObject's transform.position near the screen center. (A bit strange, I know.)
I know I could enumerate the entire scene, exclude everything outside a small "screen center frustum" (or cone), and then use the one nearest the camera.
I'm just wondering if there's a more efficient approach. The raycast functions seem to depend on colliders only. Is there any way to make them use positions instead?
I assume you’re trying to detect whatever object is in the center of the screen?
If you want to avoid iterating the whole scene (with your cone approach) and raycasting, you could use an additional camera that renders to a small RenderTexture. Give the camera a narrow FOV and make sure it points in the same direction as your main camera. During the additional render pass, you would want all objects to use a different shader, one that renders a solid color encoding the object's ID. To recover that ID you would use Texture2D.GetPixel() on the center pixel of the RenderTexture.
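A minimal sketch of that render-and-read-back flow, assuming the built-in render pipeline and a hypothetical unlit shader "Hidden/ObjectID" that outputs a per-renderer color property "_IdColor" (the shader and property names are placeholders, not Unity built-ins):

```csharp
using UnityEngine;

public class CenterObjectPicker : MonoBehaviour
{
    public Camera mainCamera;   // the player's camera
    public Shader idShader;     // assign the hypothetical ID shader here

    Camera idCamera;
    RenderTexture idTexture;
    Texture2D readback;
    Renderer[] renderers;       // cached scene renderers, index = ID - 1

    void Start()
    {
        // Tiny render target: we only ever read the center pixel.
        idTexture = new RenderTexture(16, 16, 16);
        readback  = new Texture2D(1, 1, TextureFormat.RGBA32, false);

        // Secondary camera: rendered manually, narrow FOV,
        // black background so ID 0 means "nothing hit".
        idCamera = new GameObject("IdCamera").AddComponent<Camera>();
        idCamera.enabled = false;
        idCamera.targetTexture = idTexture;
        idCamera.fieldOfView = 2f;
        idCamera.clearFlags = CameraClearFlags.SolidColor;
        idCamera.backgroundColor = Color.black;

        // Encode each renderer's index (+1) into its _IdColor property
        // via a MaterialPropertyBlock, so the scene materials stay untouched.
        renderers = FindObjectsOfType<Renderer>();
        var block = new MaterialPropertyBlock();
        for (int i = 0; i < renderers.Length; i++)
        {
            int id = i + 1;
            block.SetColor("_IdColor",
                new Color32((byte)(id & 255), (byte)((id >> 8) & 255), 0, 255));
            renderers[i].SetPropertyBlock(block);
        }
    }

    public GameObject PickCenterObject()
    {
        // Match the main camera's pose, then do one ID-colored pass.
        idCamera.transform.SetPositionAndRotation(
            mainCamera.transform.position, mainCamera.transform.rotation);
        idCamera.RenderWithShader(idShader, "");

        // Read back the center pixel and decode the renderer index.
        RenderTexture.active = idTexture;
        readback.ReadPixels(
            new Rect(idTexture.width / 2, idTexture.height / 2, 1, 1), 0, 0);
        readback.Apply();
        RenderTexture.active = null;

        Color32 c = readback.GetPixel(0, 0);
        int id = c.r | (c.g << 8);
        return id == 0 ? null : renderers[id - 1].gameObject;
    }
}
```

With two 8-bit channels you can distinguish up to 65535 objects; the readback itself is cheap because the target texture is so small.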
I only care about the point at each 3D object's position. Not the mesh. Not the collider.
Basically, it's like deleting all meshes in the entire scene, replacing them with small default spheres, and then raycasting from the screen center to see what gets hit. But that would heavily modify the scene and probably cause side effects, so I'm looking for a simpler solution.
Thanks, but they start with all objects too, and then calculate visibility manually. I ended up using the brute-force algorithm mentioned above, and it works pretty well. Much faster than I expected.
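For reference, a minimal sketch of that brute-force search, assuming the candidate objects carry a hypothetical "Selectable" tag and using a viewport-space radius as the "center cone":

```csharp
using UnityEngine;

public static class CenterFinder
{
    // Returns the object whose transform.position projects closest to
    // the screen center, keeping the candidate nearest the camera.
    public static GameObject FindNearestToCenter(
        Camera cam, float maxViewportRadius = 0.05f)
    {
        GameObject best = null;
        float bestCamDistance = float.MaxValue;

        foreach (GameObject go in GameObject.FindGameObjectsWithTag("Selectable"))
        {
            Vector3 vp = cam.WorldToViewportPoint(go.transform.position);
            if (vp.z <= 0f) continue;   // behind the camera

            // Offset of the projected position from the screen center (0.5, 0.5).
            Vector2 offset = new Vector2(vp.x - 0.5f, vp.y - 0.5f);
            if (offset.magnitude > maxViewportRadius) continue;

            // Inside the center cone: keep the one nearest the camera.
            if (vp.z < bestCamDistance)
            {
                bestCamDistance = vp.z;
                best = go;
            }
        }
        return best;
    }
}
```

Filtering by tag (or keeping a registered list of candidates) avoids touching every transform in the scene, which is likely why it runs faster than expected.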
You probably don't even need to do something like this every frame. I would run it a few times a second, which cuts the overhead by roughly 95% (e.g. 3 times a second instead of 60).
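A throttling sketch along those lines, reusing the hypothetical CenterFinder helper from the sketch above:

```csharp
using UnityEngine;

public class CenterFeedback : MonoBehaviour
{
    public Camera mainCamera;

    void Start()
    {
        // Run the search at 3 Hz instead of every frame.
        InvokeRepeating(nameof(UpdateCenterObject), 0f, 1f / 3f);
    }

    void UpdateCenterObject()
    {
        GameObject hit = CenterFinder.FindNearestToCenter(mainCamera);
        if (hit != null)
            Debug.Log(hit.name);   // replace with the actual visual feedback
    }
}
```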