Preprocessing CameraSensor image before input into CNN

Hi Barracuda team,

I have successfully imported a UNet model in ONNX format into Unity. How do I pass the visual observation from the CameraSensor into this model before sending the output to the ML-Agents Python API for training, and also render the output onto the CameraSensor display?

I am trying OnRenderImage to display the output, but I am getting an error. The error happens at worker.Execute(tensor);
[Attached: screenshot of the error, upload_2021-8-21_16-20-13.png]

Thanks!

Hi, it might be that your network is expecting 6 channels instead of 3. Maybe observation stacking is enabled on the sensor?
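As a starting point, here is a minimal sketch of feeding a camera texture into a Barracuda worker inside OnRenderImage and rendering the result back out. This is an illustration under assumptions, not a confirmed fix: the class and field names are made up, and the channel count of 3 must match what your exported ONNX model actually expects (if stacking is enabled on the sensor, the model may expect 6).

```csharp
using Unity.Barracuda;
using UnityEngine;

// Hypothetical example component; attach to the camera whose
// output should be run through the imported UNet model.
public class UNetRunner : MonoBehaviour
{
    public NNModel modelAsset;   // the imported ONNX asset
    private Model runtimeModel;
    private IWorker worker;

    void Start()
    {
        runtimeModel = ModelLoader.Load(modelAsset);
        worker = WorkerFactory.CreateWorker(
            WorkerFactory.Type.ComputePrecompiled, runtimeModel);
        // Log the expected input shape to verify the channel count
        // (3 for a plain RGB camera, 6 if two frames are stacked).
        Debug.Log(runtimeModel.inputs[0].shape);
    }

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        // Build a tensor directly from the camera texture.
        // The channel count here must agree with the model input.
        using (var input = new Tensor(src, channels: 3))
        {
            worker.Execute(input);
            Tensor output = worker.PeekOutput();
            // Copy the network output back for display.
            output.ToRenderTexture(dest);
            output.Dispose();
        }
    }

    void OnDestroy() => worker?.Dispose();
}
```

If the logged input shape shows 6 channels, either disable stacking on the CameraSensor or pass 6 as the channel count and supply the stacked frames accordingly.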

@MrOCW could you share your model so we can take a look at it? That would be helpful.