I’m looking for a 3D graphics guru to advise me on whether something is actually possible and, if so, point me in the right direction.
I would like to be able to sample a pixel on screen and read that pixel’s depth value from the depth/G-buffer. I need to access this from a C# script a number of times each frame.
I want this value so I can compare the pixel’s depth against a gameObject’s distance to the camera and detect whether the object is occluded.
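To make the comparison concrete, this is the check I have in mind, assuming I could somehow get a linearized depth sample for the object’s pixel (the names and the bias value here are just illustrative):

```csharp
using UnityEngine;

static class DepthOcclusionCheck
{
    // sampledEyeDepth: the linear eye-space depth read from the depth
    // buffer at the object's screen position (the part I don't know how
    // to get efficiently).
    public static bool IsOccluded(Camera cam, Vector3 worldPos, float sampledEyeDepth)
    {
        // WorldToViewportPoint's z component is the distance from the
        // camera plane in world units, so it is directly comparable to
        // a linearized eye-depth sample.
        float objectEyeDepth = cam.WorldToViewportPoint(worldPos).z;
        return sampledEyeDepth < objectEyeDepth - 0.01f; // small bias to avoid self-occlusion
    }
}
```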
I’m currently using physics raycasts back to the camera, which works great but requires every occluding object to have a collider. The raycast approach also falls down with transparency, e.g. leaves rendered with an alpha-cutout shader, and deforming meshes become problematic.
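For reference, my current approach looks roughly like this (simplified):

```csharp
using UnityEngine;

static class RaycastOcclusion
{
    // Cast from the object back toward the camera; any collider hit
    // along the way counts as an occluder.
    public static bool IsOccluded(Camera cam, Vector3 worldPos)
    {
        Vector3 toCamera = cam.transform.position - worldPos;
        return Physics.Raycast(worldPos, toCamera.normalized, toCamera.magnitude);
    }
}
```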
I understand that I will probably have to copy the depth buffer back to the CPU to access the pixel data, but what is the most performant way to do that while still preserving the high-precision depth values? Or is this even the right approach?
You are better off sticking with the raycasts. Reading back even a few pixels forces a GPU–CPU sync and will stall execution for at least several milliseconds, tens of milliseconds in the worst case. Reading back from the GPU is generally avoided at all costs, and when it can’t be avoided it is usually done asynchronously, spread over two frames, and I’m not sure that can be done without a native plugin.
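That said, if you’re on a Unity version that ships UnityEngine.Rendering.AsyncGPUReadback, that covers the deferred-readback pattern from managed code. A rough, untested sketch of the polling pattern, assuming you have already copied the camera depth into a readable RFloat RenderTexture (depthRT and the class name are illustrative):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class DepthReadback : MonoBehaviour
{
    public RenderTexture depthRT; // assumed: an RFloat copy of the depth buffer you made yourself

    AsyncGPUReadbackRequest request;
    bool pending;

    void LateUpdate()
    {
        if (!pending)
        {
            // Kick off an asynchronous copy; no stall, but the result
            // arrives one or more frames later.
            request = AsyncGPUReadback.Request(depthRT);
            pending = true;
        }
        else if (request.done)
        {
            pending = false;
            if (!request.hasError)
            {
                var data = request.GetData<float>();
                int x = depthRT.width / 2, y = depthRT.height / 2;
                float rawDepth = data[y * depthRT.width + x];
                // rawDepth is the non-linear depth-buffer value; linearize
                // it before comparing against object distances.
            }
        }
    }
}
```

Even on the async path the sample describes a past frame, so a fast-moving camera or object can give you stale occlusion results.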