Okay, I’ve figured out how to render to a texture, how to render a camera’s depth map, and how to render a cubemap.
Now I want one camera to constantly render a depth cubemap texture. In other words, I want a map of the distances to all points in all directions around my camera.
I was ready to write a shader for this, but then I found out about all of the above and am wondering if/how I can combine them to create a texture that I can check distances against every frame.
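To make it concrete, here's roughly what I'm picturing. This is just a sketch assuming a plain OpenGL setup (I haven't actually tried it); lookAtFace and renderSceneDepthOnly are placeholders for my own code, and the 1024 face size is arbitrary:

```cpp
#include <GL/glew.h>

// Placeholders for my own code, not real API calls.
void lookAtFace(int face);       // set the view matrix for one cube face (90-degree FOV)
void renderSceneDepthOnly();     // draw the scene with a depth-only pass

const int SIZE = 1024;
GLuint depthCubemap = 0, fbo = 0;

// One-time setup: a cubemap with a depth format on every face, plus an FBO.
void initDepthCubemap() {
    glGenTextures(1, &depthCubemap);
    glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubemap);
    for (int face = 0; face < 6; ++face) {
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_DEPTH_COMPONENT24,
                     SIZE, SIZE, 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    }
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glGenFramebuffers(1, &fbo);
}

// Every frame: attach each face as the depth target and render the scene
// from the camera's position looking down that face's axis.
void renderDepthCubemap() {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, SIZE, SIZE);
    glDrawBuffer(GL_NONE);       // depth only, no color output
    glReadBuffer(GL_NONE);
    for (int face = 0; face < 6; ++face) {
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_CUBE_MAP_POSITIVE_X + face,
                               depthCubemap, 0);
        glClear(GL_DEPTH_BUFFER_BIT);
        lookAtFace(face);
        renderSceneDepthOnly();
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```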
Does anyone have any insight that they could provide?
I'm thinking of using this approach to find the nearest point to the camera every frame, roughly like this:
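Again, just a sketch and not tested: reading the texture back to the CPU like this would stall the pipeline, and the stored depth is measured along each face's axis rather than as a true radial distance, so maybe the comparison belongs in a shader instead. nearestDistance is a hypothetical helper, and nearPlane/farPlane are assumed to match the projection used when rendering the faces.

```cpp
#include <GL/glew.h>
#include <algorithm>
#include <vector>

// Read every face back and return the smallest linear depth found.
float nearestDistance(GLuint depthCubemap, int size, float nearPlane, float farPlane) {
    std::vector<float> depths(size * size);
    float best = farPlane;
    glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubemap);
    for (int face = 0; face < 6; ++face) {
        glGetTexImage(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0,
                      GL_DEPTH_COMPONENT, GL_FLOAT, depths.data());
        for (float d : depths) {
            // Undo the perspective depth encoding to get an eye-space distance.
            float zNdc = d * 2.0f - 1.0f;
            float linear = (2.0f * nearPlane * farPlane) /
                           (farPlane + nearPlane - zNdc * (farPlane - nearPlane));
            best = std::min(best, linear);
        }
    }
    return best;
}
```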
Thanks!