Working on an iOS project where I need to post-process the image coming from the iPhone camera.
WebCamTexture doesn't inherit from RenderTexture or Texture2D, and I can't find an efficient way to get the texture out of the WebCamTexture in order to apply some shader-based post-processing effects.
I would like to do this in real time, but creating a new Texture2D and using GetPixels / SetPixels is very slow.
I also read about passing the native texture pointer, but that definitely doesn't work (it crashes, as others have mentioned).
Is there an alternative? I'm not an expert in Unity, but I hope there is a way to achieve this with reasonable performance.
This is working on my Windows laptop without any trouble. I haven’t tried putting it on iOS yet.
using System.Collections;
using UnityEngine;

public class Script : MonoBehaviour {
    WebCamTexture webcam;
    Texture2D output;
    Color32[] data;

    IEnumerator Start() {
        // Ask the user for camera access before touching the webcam.
        yield return Application.RequestUserAuthorization(UserAuthorization.WebCam);
        if (Application.HasUserAuthorization(UserAuthorization.WebCam)) {
            webcam = new WebCamTexture();
            webcam.Play();
            // Note: on some platforms width/height report a placeholder size
            // until the first real frame arrives.
            output = new Texture2D(webcam.width, webcam.height);
            GetComponent<Renderer>().material.mainTexture = output;
            data = new Color32[webcam.width * webcam.height];
        }
    }

    void Update() {
        // Only copy when the camera actually delivered a new frame.
        if (data != null && webcam.didUpdateThisFrame) {
            webcam.GetPixels32(data);
            // You can play around with data however you want here.
            // Color32 has member variables a, r, g, and b; read and write them as you like.
            output.SetPixels32(data);
            output.Apply();
        }
    }
}
Works perfectly for me.
The comments are encouraging, too:
Thank you @ArtOfWarfare
Can we add a delay? Like 800 ms, or a variable delay, and show the output after that delay? … @ArtOfWarfare
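One way to do that (not from the original thread, just a sketch): buffer each captured frame together with its capture time, and only display a frame once it is at least the configured delay old. The class and field names below are hypothetical; it assumes the same WebCamTexture-to-Texture2D setup as the script above.

using System.Collections.Generic;
using UnityEngine;

public class DelayedWebcamDisplay : MonoBehaviour {
    public float delaySeconds = 0.8f; // e.g. 800 ms; expose it for a variable delay

    WebCamTexture webcam;
    Texture2D output;
    // Each entry pairs a captured frame with the time it was captured.
    readonly Queue<KeyValuePair<float, Color32[]>> buffer = new Queue<KeyValuePair<float, Color32[]>>();

    void Start() {
        webcam = new WebCamTexture();
        webcam.Play();
        output = new Texture2D(webcam.width, webcam.height);
        GetComponent<Renderer>().material.mainTexture = output;
    }

    void Update() {
        if (webcam.didUpdateThisFrame) {
            // GetPixels32() allocates a fresh array, which we need anyway to keep the frame around.
            buffer.Enqueue(new KeyValuePair<float, Color32[]>(Time.time, webcam.GetPixels32()));
        }
        // Drain everything that is old enough; the last dequeued frame ends up on screen.
        while (buffer.Count > 0 && Time.time - buffer.Peek().Key >= delaySeconds) {
            output.SetPixels32(buffer.Dequeue().Value);
            output.Apply();
        }
    }
}

Be aware that buffering 800 ms of frames keeps several full-resolution Color32 arrays alive at once, so memory use grows with the delay.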
If anyone is still searching for a solution to this issue, I was able to solve it:
Enable URP.
Make a Shader Graph.
Make a material from the shader.
Apply the material to a RawImage and render the webcam texture on that RawImage.
Any changes you make to the shader are directly reflected in the RawImage with almost no loss in performance. A sketch of the wiring is below.
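For completeness, here is a minimal sketch of how those steps wire together in code. The class and field names are hypothetical, and the material is assumed to have been created from your Shader Graph in the editor:

using UnityEngine;
using UnityEngine.UI;

public class WebcamRawImage : MonoBehaviour {
    public RawImage rawImage;       // the UI RawImage that displays the feed
    public Material effectMaterial; // material created from the Shader Graph

    WebCamTexture webcam;

    void Start() {
        webcam = new WebCamTexture();
        webcam.Play();
        // RawImage.texture accepts any Texture, so the WebCamTexture can be assigned directly.
        rawImage.texture = webcam;
        // The shader then processes the texture on the GPU every frame.
        rawImage.material = effectMaterial;
    }
}

Because the shader samples the WebCamTexture directly on the GPU, there is no per-pixel CPU copy, which is why this approach stays fast.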