Get and Set Camera pixels in Unity.

Is there any way to get AND set the output of a Camera component (or any solution that would mimic this)? For clarity, I want to send literal camera pixel data over a network for my application, so that the client would not have to internally process events and would just be viewing them.

I've looked through the Camera API and have not found a way to get/set pixel output. Any reply would be extremely helpful, thanks in advance.

RenderTexture might be the path you're looking for.


If you want to get the pixel data from the screen, there are several methods, but the main one is using Texture2D.ReadPixels() to copy data from the currently set render target into a Texture2D.
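As a minimal sketch of that readback (assuming the built-in render pipeline, where OnPostRender fires on a script attached to the camera after it finishes rendering):

```csharp
using UnityEngine;

public class ScreenReader : MonoBehaviour
{
    Texture2D grab;

    void Start()
    {
        grab = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
    }

    // OnPostRender is only called on scripts attached to a Camera
    void OnPostRender()
    {
        // Copies the active render target (here, the screen) into the texture's CPU-side data
        grab.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0, false);
        grab.Apply(); // upload back to the GPU if you also want to display it
    }
}
```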

Inversely you can render to the screen by using Graphics.Blit() with a Texture2D.
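Going the other way, a rough sketch of pushing a texture to the screen with Graphics.Blit(), again assuming the built-in pipeline where OnRenderImage runs on a camera-attached script:

```csharp
using UnityEngine;

public class TextureToScreen : MonoBehaviour
{
    public Texture2D sourceTexture; // assign the received frame here (illustrative name)

    // Runs after this camera renders; replaces its output with our texture
    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        Graphics.Blit(sourceTexture, dest);
    }
}
```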

However... ReadPixels() is very slow, and sending uncompressed images across a network, even a local network, is also very slow. Hence the reason video compression exists. Unity has a utility for capturing footage in real time (via the Unity Recorder package), but it only works in the editor. And there aren't any tools for sending or playing back those streams over the network. And it's not a trivial thing to get working.

There's a reason services like Parsec exist.


After going through the API, this doesn’t seem to be what I am looking for, though it does use what I'm looking for. I want to access the data that the camera is sending to the RenderTexture, which unfortunately you cannot grab from the RenderTexture itself. It is a step closer though, so thanks either way.


This seems to be exactly what I was looking for. And thank you for the heads-up about image compression; I wasn’t thinking about it that way, so I would’ve missed it. It somehow flew over my head that a generic 1600 × 900 screen is 1,440,000 pixels per frame.

I will try to see if there is a way to modify and rethink my approach; however, this method will probably be scrapped due to the speed. But I appreciate the reply either way.

Here's my rough solution, derived from the comments above (mainly bgolus, thank you!). It grabs the screen (camera), copies the RGB pixel data (ignoring alpha) into a byte array, then sends the array to an external app over TCP every frame. As pointed out above, the network wouldn't support the bandwidth of an uncompressed HD stream, so I limited the screen resolution to 400 × 400. (Another approach would be to take the full HD frame and downsample it, e.g. keeping one pixel out of every 10.) A 400 × 400 frame at 3 bytes per pixel (RGB) is 480,000 bytes.
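The downsampling idea mentioned in passing could be sketched roughly like this (the helper and names such as `stride` are illustrative, not from the original post):

```csharp
using UnityEngine;

public static class FrameDownsampler
{
    // Keeps one pixel out of every `stride` in both axes, packing RGB bytes.
    public static byte[] Downsample(Color32[] pixels, int width, int height, int stride)
    {
        int outW = width / stride, outH = height / stride;
        byte[] result = new byte[outW * outH * 3];
        int k = 0;
        for (int y = 0; y < outH; y++)
        {
            for (int x = 0; x < outW; x++)
            {
                Color32 p = pixels[(y * stride) * width + (x * stride)];
                result[k++] = p.r;
                result[k++] = p.g;
                result[k++] = p.b;
            }
        }
        return result;
    }
}
```

With stride 10, a 1920 × 1080 frame shrinks to 192 × 108, at the cost of blocky output; averaging each stride × stride block instead of picking one pixel would look smoother.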

The following can be attached to any in-game object that has a renderer, like a cube.

using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;
using System.Net;
using System.Net.Sockets;
using System.Text;
using UnityEngine;

public class CubeNet : MonoBehaviour
{
    TcpListener listener;
    string msg = "connected"; // greeting sent on connect (was left unset in the original)
    bool connected = false;
    TcpClient client;
    NetworkStream ns;

    public Renderer screenGrabRenderer;
    private Texture2D destinationTexture;
    private Color32[] pixels;
    private static byte[] pixelsComp;

    void Start()
    {
        // Fill in the address to listen on (the IP was removed from the original post)
        listener = new TcpListener(IPAddress.Parse(""), 55001);
        listener.Start(); // must be called before Pending()/AcceptTcpClient()
        print("is listening");

        screenGrabRenderer = GetComponent<Renderer>();
        // Assumes the game resolution is 400x400 so the buffer below is the right size
        destinationTexture = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
        screenGrabRenderer.material.mainTexture = destinationTexture;
        Camera.onPostRender += OnPostRenderCallback;
        pixelsComp = new byte[480000]; // 400 * 400 * 3 bytes (RGB)
    }

    void Update()
    {
        if (!connected)
        {
            if (listener.Pending())
            {
                print("socket connected");
                connected = true;
                client = listener.AcceptTcpClient();
                ns = client.GetStream();
                byte[] bytes = Encoding.ASCII.GetBytes(msg);
                ns.Write(bytes, 0, bytes.Length);
            }
        }
        else
        {
            // Wait for any request from the client, then reply with the latest frame.
            // Note: this Read() blocks the main thread until the client sends something.
            byte[] buffer = new byte[client.ReceiveBufferSize];
            int bytesRead = ns.Read(buffer, 0, client.ReceiveBufferSize);
            if (bytesRead > 0)
            {
                ns.Write(pixelsComp, 0, pixelsComp.Length);
            }
        }
    }

    void OnPostRenderCallback(Camera cam)
    {
        if (cam == Camera.main)
        {
            Rect regionToReadFrom = new Rect(0, 0, Screen.width, Screen.height);
            destinationTexture.ReadPixels(regionToReadFrom, 0, 0, false);
            pixels = destinationTexture.GetPixels32(0);

            // Pack the pixel data sequentially as R, G, B bytes
            for (int i = 0, j = 0; i < pixelsComp.Length; i += 3, j++)
            {
                pixelsComp[i]     = pixels[j].r;
                pixelsComp[i + 1] = pixels[j].g;
                pixelsComp[i + 2] = pixels[j].b;
            }

            //System.Array.Reverse(pixels, 0, pixels.Length);
            //destinationTexture.SetPixels32(pixels, 0);
        }
    }

    // Remove the onPostRender callback
    void OnDestroy()
    {
        Camera.onPostRender -= OnPostRenderCallback;
    }
}
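For completeness, a hedged sketch of what the receiving side might look like: a plain .NET console client that sends a one-byte request and reads back one 480,000-byte frame. This is not from the original post; the names and the framing assumption (request byte, then a fixed-size frame) are inferred from the server code above.

```csharp
using System;
using System.Net.Sockets;
using System.Text;

class FrameClient
{
    const int FrameSize = 400 * 400 * 3; // must match the server's RGB buffer

    static void Main()
    {
        using (TcpClient client = new TcpClient("127.0.0.1", 55001))
        using (NetworkStream ns = client.GetStream())
        {
            // The server above also sends a short ASCII greeting on connect;
            // a real client would need to consume that before the frame bytes.

            // Request one frame, then keep reading until the whole frame arrives:
            // TCP is a byte stream, so a single Read() may return a partial frame.
            ns.Write(new byte[] { 1 }, 0, 1);

            byte[] frame = new byte[FrameSize];
            int total = 0;
            while (total < FrameSize)
            {
                int read = ns.Read(frame, total, FrameSize - total);
                if (read <= 0) break; // connection closed
                total += read;
            }
            Console.WriteLine("Received " + total + " bytes");
        }
    }
}
```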