Passing rendered frames to a machine learning algorithm at runtime

I’m interested in doing a small research project: passing image frames containing camera depth, world-space normals, and some stencil-shaded semantic information into a machine learning algorithm. The goal is to create an algorithm that can use this information to enhance renders.

Before proceeding I need two bits of information. First: what’s the best way to write the frames to PNGs so I can use each frame for training?
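For what it’s worth, here is a minimal sketch of the usual approach: render the camera into a RenderTexture, read it back into a Texture2D, and encode with ImageConversion.EncodeToPNG. The class name, output folder, and the assumption that the camera is assigned in the Inspector are all mine; depth and normals would need an extra pass (e.g. a shader sampling _CameraDepthTexture) that writes them into the color target first.

```csharp
using System.IO;
using UnityEngine;

// Minimal sketch: dump each rendered frame of one camera to a PNG.
// Depth/normals would need an extra pass (e.g. a shader sampling
// _CameraDepthTexture) rendered into the color target first.
public class FrameDumper : MonoBehaviour
{
    public Camera captureCamera;          // assumption: assigned in the Inspector
    public string outputDir = "Captures"; // hypothetical output folder
    int frameIndex;

    void LateUpdate()
    {
        var rt = RenderTexture.GetTemporary(Screen.width, Screen.height, 24);
        var previous = captureCamera.targetTexture;

        captureCamera.targetTexture = rt;
        captureCamera.Render();               // render this frame into the RenderTexture
        captureCamera.targetTexture = previous;

        var tex = new Texture2D(rt.width, rt.height, TextureFormat.RGBA32, false);
        RenderTexture.active = rt;
        tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0); // GPU -> CPU readback
        tex.Apply();
        RenderTexture.active = null;
        RenderTexture.ReleaseTemporary(rt);

        Directory.CreateDirectory(outputDir);
        File.WriteAllBytes(Path.Combine(outputDir, $"frame_{frameIndex++:D5}.png"),
                           ImageConversion.EncodeToPNG(tex));
        Destroy(tex);
    }
}
```

Synchronous ReadPixels stalls the GPU, which is fine for offline dataset generation; AsyncGPUReadback would be the faster option if capture has to run alongside gameplay.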

Second: since neural networks are mostly sequences of matrix multiplications, would it be possible to implement my algorithm as a shader?
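In principle, yes: a fully connected layer is just a matrix multiply, which maps naturally onto a compute shader. Here’s a rough sketch of the C#-side plumbing under that assumption; the MatMul.compute asset and its DenseLayer kernel are hypothetical, and the HLSL itself (one thread per output element, looping over the input vector) isn’t shown.

```csharp
using UnityEngine;

// Sketch of dispatching a hypothetical "DenseLayer" compute kernel
// that computes y = W * x for a single fully connected layer.
public class DenseLayerGPU : MonoBehaviour
{
    public ComputeShader matMulShader; // hypothetical MatMul.compute asset

    public float[] Forward(float[] input, float[,] weights)
    {
        int inSize = input.Length;
        int outSize = weights.GetLength(0);

        var inputBuf  = new ComputeBuffer(inSize, sizeof(float));
        var weightBuf = new ComputeBuffer(inSize * outSize, sizeof(float));
        var outputBuf = new ComputeBuffer(outSize, sizeof(float));
        inputBuf.SetData(input);
        weightBuf.SetData(weights);

        int kernel = matMulShader.FindKernel("DenseLayer"); // hypothetical kernel name
        matMulShader.SetInt("_InSize", inSize);
        matMulShader.SetBuffer(kernel, "_Input", inputBuf);
        matMulShader.SetBuffer(kernel, "_Weights", weightBuf);
        matMulShader.SetBuffer(kernel, "_Output", outputBuf);

        // One thread per output element, 64 threads per group
        // (must match [numthreads(64,1,1)] in the HLSL).
        matMulShader.Dispatch(kernel, Mathf.CeilToInt(outSize / 64f), 1, 1);

        var result = new float[outSize];
        outputBuf.GetData(result);

        inputBuf.Release(); weightBuf.Release(); outputBuf.Release();
        return result;
    }
}
```

Hand-scheduling every layer like this gets tedious fast, though.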

Ahh, looks like there is a new package to do exactly this task:
https://docs.unity3d.com/Packages/com.unity.barracuda@1.0/manual/TensorHandling.html
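Going by that page, inference could look roughly like the sketch below (the asset and field names are placeholders of mine). Barracuda builds a Tensor directly from a texture and can write the result back to a RenderTexture, so the data never has to leave the GPU.

```csharp
using Unity.Barracuda;
using UnityEngine;

// Sketch of running an imported ONNX model with Barracuda on a frame.
// The model asset and texture fields here are placeholders.
public class BarracudaEnhancer : MonoBehaviour
{
    public NNModel modelAsset;     // ONNX model imported into the project
    public RenderTexture sourceRT; // rendered frame (input)
    public RenderTexture resultRT; // enhanced frame (output)
    IWorker worker;

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);
        // ComputePrecompiled runs the network as compute shaders on the GPU.
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.ComputePrecompiled, model);
    }

    void Update()
    {
        using (var input = new Tensor(sourceRT, channels: 3)) // texture -> tensor
        {
            worker.Execute(input);
            Tensor output = worker.PeekOutput();
            output.ToRenderTexture(resultRT); // tensor -> texture, stays on the GPU
        }
    }

    void OnDestroy() => worker?.Dispose();
}
```

Note that PeekOutput() returns a tensor owned by the worker, so only the input tensor and the worker itself need explicit disposal.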

I’m interested in implementing a conditional GAN (cGAN) with Unity + PyTorch…