I know this might sound silly, but I basically want to grab the stereoscopic textures from the left- and right-eye cameras, process them as 2D textures, and then replace the camera view with those 2D textures (fullscreen). I’m using a Meta Quest 3 and have the Meta XR All-in-One SDK installed with URP.
WHAT I NEED HELP WITH:
Specifically, what I want to do is the following:
- Grab the rendered textures from the left-eye and right-eye cameras
- Process these textures as PNGs in Python (e.g., convert them to black and white, run some image classification with OpenCV, etc.)
- Then send the processed PNGs back to Unity and display them to the left and right eyes again (a rough sketch of the round trip is below)
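To be concrete, here’s a minimal sketch of the single-eye round trip I have in mind on the Unity side. eyeRT is a placeholder for whatever RenderTexture ends up being the capture target, and the transport to/from Python is omitted since that part already works. ReadPixels is the simple-but-slow path; AsyncGPUReadback would presumably be better on Quest:

```csharp
using UnityEngine;

// Sketch: round-trip one eye's RenderTexture to PNG bytes and back.
// "eyeRT" is a placeholder capture target; how the bytes get to/from
// Python is out of scope here.
public class EyeTextureRoundTrip : MonoBehaviour
{
    public RenderTexture eyeRT;        // the eye's capture target
    Texture2D readback;                // CPU-side copy used for encoding
    public Texture2D processedResult;  // what gets displayed back to the eye

    public byte[] CaptureAsPng()
    {
        if (readback == null)
            readback = new Texture2D(eyeRT.width, eyeRT.height, TextureFormat.RGBA32, false);

        // Copy GPU texture to CPU (synchronous and slow; AsyncGPUReadback
        // would avoid the stall)
        var prev = RenderTexture.active;
        RenderTexture.active = eyeRT;
        readback.ReadPixels(new Rect(0, 0, eyeRT.width, eyeRT.height), 0, 0);
        readback.Apply();
        RenderTexture.active = prev;

        return readback.EncodeToPNG();
    }

    public void ApplyProcessedPng(byte[] pngBytes)
    {
        if (processedResult == null)
            processedResult = new Texture2D(2, 2); // LoadImage resizes as needed
        processedResult.LoadImage(pngBytes);       // decode the PNG back into a Texture2D
    }
}
```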
I’m not asking about the post-processing stack, because I need to send these textures out to Python and then back into Unity. I’ve already figured out how to send and receive images between Unity and Python.
WHAT THE TROUBLE IS
What I’m having trouble with is:
- figuring out how to capture the left- and right-eye render textures so they stay faithful to what the user actually sees in VR
- how to display the processed textures back to the left and right eyes, replacing the normal camera view
WHAT I THINK IS INVOLVED IN THE SOLUTION
As far as I understand from the advice I’ve gotten from others, I might need to add a render feature that uses a blit to replace the VR camera view with the new render texture, as sketched below. Alternatively, maybe some sort of projection mapping onto meshes: if I know the orientation of the headset, I could just project the modified 2D textures back onto the meshes every update?
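To show what I mean by that first option, here’s my rough understanding of such a render feature (just a sketch, assuming a pre-RenderGraph URP; I gather cameraColorTargetHandle was cameraColorTarget in older URP versions, and that cmd.Blit may need an XR-aware replacement under single-pass instanced rendering):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Sketch of a URP render feature that blits a texture over the camera's
// color target late in the frame. API names (cameraColorTargetHandle etc.)
// vary across URP versions; this assumes roughly URP 13/14.
public class FullscreenReplaceFeature : ScriptableRendererFeature
{
    class ReplacePass : ScriptableRenderPass
    {
        public Texture source; // the processed texture to show

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            if (source == null) return;

            CommandBuffer cmd = CommandBufferPool.Get("FullscreenReplace");
            // Overwrite the camera color target with our texture. With
            // single-pass instanced XR this may need an XR-aware blit
            // (e.g. a shader sampling a Texture2DArray) instead of cmd.Blit.
            cmd.Blit(source, renderingData.cameraData.renderer.cameraColorTargetHandle);
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }
    }

    public Texture processedTexture; // assigned each frame from a script
    ReplacePass pass;

    public override void Create()
    {
        pass = new ReplacePass { renderPassEvent = RenderPassEvent.AfterRenderingPostProcessing };
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        pass.source = processedTexture;
        renderer.EnqueuePass(pass);
    }
}
```

The idea would be to assign processedTexture every frame from whatever script receives the Python output, but I’m not sure this is the right way to target each eye separately.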
I can’t seem to replace the render textures of the left and right eyes, and I don’t know how to extract them with the proper distortions (FOV, etc.).
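From what I’ve read (happy to be corrected), the per-eye textures the app renders are pre-distortion, and the lens distortion is applied afterwards by the compositor, so what I’d actually need to reproduce is each eye’s view matrix and asymmetric projection. Something like this is my current idea for capturing one eye faithfully (captureCamera is an extra disabled helper camera, which I realize costs an extra render per eye):

```csharp
using UnityEngine;

// Sketch, not a drop-in solution: a disabled "capture" camera renders
// one eye's view into a RenderTexture using the XR camera's own stereo
// matrices, so the capture matches the per-eye (asymmetric) projection.
public class EyeCapture : MonoBehaviour
{
    public Camera xrCamera;            // the main XR rig camera
    public Camera captureCamera;       // a disabled helper camera
    public RenderTexture leftEyeRT;    // sized e.g. from XRSettings.eyeTextureWidth/Height

    void LateUpdate()
    {
        // Copy the left eye's view and projection from the headset
        captureCamera.worldToCameraMatrix =
            xrCamera.GetStereoViewMatrix(Camera.StereoscopicEye.Left);
        captureCamera.projectionMatrix =
            xrCamera.GetStereoProjectionMatrix(Camera.StereoscopicEye.Left);

        captureCamera.targetTexture = leftEyeRT;
        captureCamera.Render();        // manual render into the RenderTexture
    }
}
```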
I think an easy way to frame what I want is this: I just want to take real-time pictures of what the left and right eyes see in a VR scene and then, through code, replace them 1-to-1.
Any help would be appreciated!
Thanks a million!
EDIT: modified the text to make the issue clearer