Is there any way to do a single Camera.Render() with URP?

Right now, a feature in our game relies on being able to disable some effects, manually render one camera into a render texture, and then re-enable the effects, all in one frame.

Is there any way to do this with URP, or not at all? The Camera.Render() hook is listed as “not supported”, and without this feature we can’t really upgrade to URP at all.

Also, bonus question: Would our game theoretically be able to run on a Nintendo Switch with the built-in render pipeline?

You can use UniversalRenderPipeline.RenderSingleCamera(). It is similar to Camera.Render().

Both the URP and built-in pipelines support the Nintendo Switch.


Thanks for asking this question, kbm.

What context do you pass to RenderSingleCamera? I’m using it to render one frame of a secondary camera, and I can’t find any examples of its use.

Thanks in advance for any assistance!

Geo


In case you haven’t figured this out: I ran into the same problem, so I’m just paying forward the help I was offered!

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;
using RenderPipeline = UnityEngine.Rendering.RenderPipelineManager;

public class PortalRenderer : MonoBehaviour
{
    // Disabled secondary camera that we render manually each frame.
    [SerializeField] private Camera portalCamera;

    private void OnEnable()
    {
        RenderPipeline.beginCameraRendering += UpdateCamera;
    }

    private void OnDisable()
    {
        // Always unsubscribe, or the callback will fire on a destroyed object.
        RenderPipeline.beginCameraRendering -= UpdateCamera;
    }

    void UpdateCamera(ScriptableRenderContext SRC, Camera camera)
    {
        // Skip the portal camera itself to avoid recursive rendering.
        if (camera == portalCamera)
            return;

        UniversalRenderPipeline.RenderSingleCamera(SRC, portalCamera);
    }
}

Is there a way to do this from an editor script?

(I’m just trying to render a temporary camera to a texture and save it to a file, something that used to work with Camera.Render() in the classic render pipeline.)

Edit: got it working after discovering the Editor Coroutines package and using a coroutine to wait a frame for the camera to render itself.
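
In case it helps anyone else, here is a rough sketch of that approach, assuming the Editor Coroutines package (com.unity.editorcoroutines) is installed. CameraCapture, Capture, and the output path are my own illustrative names, not anything from the package:

using System.Collections;
using System.IO;
using UnityEngine;
using Unity.EditorCoroutines.Editor;

public static class CameraCapture
{
    // Kick off a capture of a camera that renders into a RenderTexture.
    public static void Capture(Camera cam, RenderTexture rt, string path)
    {
        cam.targetTexture = rt;
        cam.enabled = true; // let URP render it as part of the normal loop
        EditorCoroutineUtility.StartCoroutineOwnerless(CaptureAfterFrame(rt, path));
    }

    static IEnumerator CaptureAfterFrame(RenderTexture rt, string path)
    {
        yield return null; // wait one editor update so the camera has rendered

        var previous = RenderTexture.active;
        RenderTexture.active = rt;
        var tex = new Texture2D(rt.width, rt.height, TextureFormat.RGBA32, false);
        tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
        tex.Apply();
        RenderTexture.active = previous;

        File.WriteAllBytes(path, tex.EncodeToPNG());
        Object.DestroyImmediate(tex);
    }
}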


The Boat Attack demo uses Camera.Render() with URP to capture the heightmap in the editor and when starting the game. So it may work in the editor, with some extra overhead.


Interesting. It might just have been Graphics.CopyTexture (from a RenderTexture to a saveable Texture2D) not working as expected when I tried it, since I ended up switching to Texture2D.ReadPixels to get the coroutine version working.

When I use UniversalRenderPipeline.RenderSingleCamera() with camera.enabled set to false, the camera can’t render UI canvases. Does anyone know why?

The RenderRequest API should be what you need. It landed in 2022.2: https://github.com/Unity-Technologies/Graphics/commit/76150c8114170b7e2359ccd786c02b22bafdb125

Where is the right place to call SubmitRenderRequest? I get an error in BeginContextRendering:
Recursive rendering is not supported in SRP (are you calling Camera.Render from within a render pipeline?).

Seconding that; it’s not yet usable.

It would be great to have some updates on this as well.

Thanks!

Hi @yu_yang, you can call SubmitRenderRequest() from your scripts as long as it is called outside of the Unity render loop.

As you noticed, we currently prevent its usage in functions called by the render loop to avoid recursive rendering trouble, but we plan to address that restriction.

The RenderRequest API works for both URP and HDRP. Here is one example: Render Requests | Core RP Library | 17.0.3

Let me know if it helps!
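
For reference, the pattern on that docs page looks roughly like this; cam and destination are illustrative fields, and SubmitRequest would be called from your own update logic rather than from a render callback:

using UnityEngine;
using UnityEngine.Rendering;

public class RenderRequestExample : MonoBehaviour
{
    public Camera cam;                 // camera to render
    public RenderTexture destination;  // texture to render into

    // Call this from your own code, outside the render loop
    // (e.g. from Update), not from a render pipeline callback.
    void SubmitRequest()
    {
        var request = new RenderPipeline.StandardRequest();

        // StandardRequest is supported by both URP and HDRP.
        if (RenderPipeline.SupportsRenderRequest(cam, request))
        {
            request.destination = destination;
            RenderPipeline.SubmitRenderRequest(cam, request);
        }
    }
}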

Hi, thanks.

So is it better to use RenderSingleCamera() for now, since SubmitRenderRequest() doesn’t yet work inside the render loop and will be updated later?

Because moving it out of the render loop changes the behaviour of the program quite a lot.

Looking forward to some updates on this. I’m currently stuck trying to get a world-space Canvas to render into a RenderTexture. SubmitRenderRequest makes it render fine, but calling it from within beginCameraRendering throws an error (even though everything looks and works all right).
So I would really like to be able to call SubmitRenderRequest from within beginCameraRendering without the error. Moving SubmitRenderRequest outside of the render loop (into LateUpdate, for example) works, but makes my planar reflections lag by one frame; a rough sketch of that workaround is below.
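
In case it helps anyone hitting the same error, here is roughly where my call ended up. This is only a sketch; reflectionCamera and reflectionTexture are placeholder names for my planar-reflection setup:

using UnityEngine;
using UnityEngine.Rendering;

public class PlanarReflection : MonoBehaviour
{
    public Camera reflectionCamera;
    public RenderTexture reflectionTexture;

    void LateUpdate()
    {
        // Outside the render loop this works without the recursive-rendering
        // error, but the reflection ends up one frame behind the main camera.
        var request = new RenderPipeline.StandardRequest();
        if (RenderPipeline.SupportsRenderRequest(reflectionCamera, request))
        {
            request.destination = reflectionTexture;
            RenderPipeline.SubmitRenderRequest(reflectionCamera, request);
        }
    }
}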

What really sucks about this is that having to wait for the SRP to update the render makes tooling much more complex.

In BiRP, I can simply render something to a texture and use it. Now if I want to do something like this, I either need to move my entire tooling into the rendering system, or come up with some way to serialize the data instead of rendering it on the fly, deferring updates until after rendering happens.

For instance, I render depth buffers from meshes and use them to modify the terrain height when I load the scene in the editor. This works great in BiRP, but in SRPs this data won’t be available until after rendering a frame, which means I can’t update the scene until after the first frame.

I realize it is designed this way to prevent stalling the renderer, which is great for gameplay, but tooling does not care about that and is being saddled with loads of extra complexity because of it. Also, if you try to render just a depth buffer with this technique, you’ll get a null reference exception, so I now have to waste memory on a full color buffer for no reason, and it doesn’t seem to create a valid depth buffer anyway. Blerg…

Hi,

No, it certainly does not help. Please fix the pipeline so it is usable, or just tell us it will remain unusable so we can stop working with it. URP seems like a practical joke at this point: it is slow (vastly slower than BiRP for custom image effects), complex, and fragmented; Render Graph will completely break its already problematic API, so that not a single existing custom effect will work; it simply does not work in many cases; it is missing major features in Shader Graph; and so on.

If something won’t be ready for development until 10-15 years down the line, it should not be released and declared a ready-to-use system, tricking users into adopting it. There is only one usable pipeline currently, and for many years to come, and that is BiRP, unless you want to make only the most basic visuals and effects.