Calling Camera.Render() with RenderTexture

Hello, we are currently trying to render a camera's view onto a quad; here is sample code:

    void Start()
    {
        int size = 1024;
        _rendererTexture = new RenderTexture(size, size, 16, RenderTextureFormat.ARGB32);
        _targetRenderer.material.SetTexture("_BaseMap", _rendererTexture);
        _camera.targetTexture = _rendererTexture;
    }
    
    void Update()
    {
        if (!_copyTextures)
        {
            _currentDelay += Time.deltaTime;
        
            if (_currentDelay > 5f)
            {
                _copyTextures = true;
                Debug.Log("Start copying textures");
            }
        }
        else
        {  
            _camera.Render();
        }
    }

When running in the simulator, nothing happens and the quad just stays black.

Is there a way to fix this or a different way to achieve the effect?

This should work, as far as I know. However, one additional step to try would be to manually dirty the render texture each frame after rendering, signaling to PolySpatial that it should be transferred. You can do this with Unity.PolySpatial.PolySpatialObjectUtils.MarkDirty(_rendererTexture).
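Applied to the Update() loop from the original post, that might look like the following sketch (assuming the same `_camera` and `_rendererTexture` fields; the MarkDirty call goes right after Render so each new frame is transferred):

    void Update()
    {
        if (_copyTextures)
        {
            _camera.Render();
            // Mark the render texture dirty so PolySpatial knows to transfer
            // the updated contents to the visionOS side this frame.
            Unity.PolySpatial.PolySpatialObjectUtils.MarkDirty(_rendererTexture);
        }
    }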


Adding this line fixed it, thank you!
Unity.PolySpatial.PolySpatialObjectUtils.MarkDirty(_rendererTexture)


I’ve been exploring the process of configuring a secondary camera to output its feed to a render texture and then applying that texture to a quad using a visionOS-compatible material.

Despite multiple attempts, the texture consistently appears black on the material. I’ve experimented with various materials and camera/texture settings, to no avail. I’ve also tried the Unity.PolySpatial.PolySpatialObjectUtils.MarkDirty(_rendererTexture) method, but unfortunately, it didn’t resolve the issue.

Additional context:
I’m working within a mixed reality environment, utilizing a bounded camera setup.

My query is:
Do PolySpatial environments support the use of render textures and secondary cameras?

Any guidance on this matter would be highly valuable!

They should, yes. One piece you might be missing is manually calling Render on the camera. This is necessary because visionOS builds run in batch mode, where cameras are not rendered every frame as in the usual Unity run loop. In our testing, we use the following script to do this:

using UnityEngine;

// Manually renders the attached camera every frame when running in batch
// mode (as visionOS builds do), since cameras are otherwise not rendered
// by the normal run loop there.
public class BatchModeUpdateRenderer : MonoBehaviour
{
    Camera m_Camera;

    void Start()
    {
        m_Camera = GetComponent<Camera>();
    }

    void Update()
    {
        if (Application.isBatchMode && m_Camera)
            m_Camera.Render();
    }
}

That was it, thanks Kapolka!