Hi, so I’m struggling to figure out how to render some meshes to a render texture immediately in URP.
In the built-in pipeline you can build a command buffer and call Graphics.ExecuteCommandBuffer.
However, in URP, doing this results in pink materials.
I want to draw some objects to a texture at any time and have that function return a Texture2D.
I see that using RenderPipelineManager.beginContextRendering does work, but that means I can’t return a texture immediately.
What do I need to do in order to render some meshes at any time to a texture in URP?
To draw to a texture rather than the main buffer, I believe you just need to set your texture as the active render target prior to calling DrawRenderers().
I mucked around with this a year ago so I may have some details incorrect.
Most likely a terrible idea, but this worked for me:
Add an Action to RenderPipelineManager.beginCameraRendering (as an example, I’m calling mine “RenderNow”) containing something like:
private void RenderNow(ScriptableRenderContext context, Camera camera)
{
    // initialize [_renderingCamera] with the perspective you want to render from
    if (camera == _renderingCamera)
    {
        var cmd = CommandBufferPool.Get();
        // Set and clear the render target
        cmd.SetRenderTarget(rt);
        cmd.ClearRenderTarget(true, true, Color.black);
        // Your renderer and Material here; any Draw function from the command buffer should be usable
        cmd.DrawRenderer(renderer, material);
        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
    }
}
Disable the [_renderingCamera] GameObject, and call _renderingCamera.Render() whenever you want to render.
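For reference, the subscribe/trigger flow could be sketched like this (a minimal sketch; the class name and the RenderToTexture method are my own placeholders, not from the original post):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class ImmediateRenderer : MonoBehaviour
{
    [SerializeField] private Camera _renderingCamera; // disabled camera, used only as a trigger

    private void OnEnable()  => RenderPipelineManager.beginCameraRendering += RenderNow;
    private void OnDisable() => RenderPipelineManager.beginCameraRendering -= RenderNow;

    // Kicks off one render of the disabled camera, which in turn
    // fires the beginCameraRendering callback below.
    public void RenderToTexture() => _renderingCamera.Render();

    private void RenderNow(ScriptableRenderContext context, Camera camera)
    {
        if (camera != _renderingCamera) return;
        // ...command buffer work as described above...
    }
}
```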
Turns out Graphics.DrawMesh works within that callback, and is more useful to me, as I can specify the layer and camera to draw the meshes into.
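The overload in question takes a layer and a target camera; a hedged sketch of the call inside the callback (mesh and material are placeholders):

```csharp
// Draws `mesh` with `material` on layer 0, visible only to _renderingCamera.
Graphics.DrawMesh(mesh, Matrix4x4.identity, material, 0, _renderingCamera);
```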
Basically, I’m trying to render a static preview of some meshes isolated from other scene elements (lighting, shadows, etc.). So while the command buffer method does work, it unfortunately includes lighting and shadow information from the current scene.
It would be nice if there were a way to render some meshes excluded from all scene lighting and provide my own lighting setup. Maybe it’s possible for a command buffer to override some internal values, like light color, light direction, or the shadow map, but I’m not sure.
I’m not sure if you can override Unity’s shader values using command buffers either, but one option is to guard the lighting, GI, etc. calculations, just like how [SHADERGRAPH_PREVIEW] is defined in Shader Graph shaders.
You’ll either need to modify the URP package or modify your custom shader, but you can have something like this in your shader code:
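The snippet itself wasn’t preserved in this thread, but the guard would look roughly like this HLSL sketch (the keyword name CUSTOM_PREVIEW and the surrounding variable names are assumptions):

```hlsl
// Inside your shader's lighting code:
#if defined(CUSTOM_PREVIEW)
    // Ignore scene lights and use a hard-coded directional light instead.
    half3 lightDir   = normalize(half3(0.5h, 0.5h, -0.5h));
    half3 lightColor = half3(1.0h, 1.0h, 1.0h);
    half  ndotl      = saturate(dot(inputData.normalWS, lightDir));
    color = surfaceData.albedo * lightColor * ndotl;
#else
    // Regular URP lighting path.
    color = UniversalFragmentPBR(inputData, surfaceData).rgb;
#endif
```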
You could then use Material.EnableKeyword before calling Graphics.DrawMesh.
Don’t forget to add #pragma multi_compile or shader_feature if you use this method.
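On the C# side, that could look something like this (CUSTOM_PREVIEW matches whatever keyword you declared in the shader; mesh and material are placeholders):

```csharp
// Requires a matching declaration in the shader, e.g.:
//   #pragma multi_compile _ CUSTOM_PREVIEW
material.EnableKeyword("CUSTOM_PREVIEW");
Graphics.DrawMesh(mesh, Matrix4x4.identity, material, 0, _renderingCamera);
```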
It renders unlit objects or particles to a render texture in a defined volume, from top down, basically like an orthographic camera. It can be turned into a perspective camera by modifying one of the matrices I used, and it saves the overhead of adding a new camera as well as some layers.
This is some early experimentation I used while developing my asset, the veg engine. I’m not sure what state it’s in; it might be messy…
If you want to support all pipelines with an unlit shader, a nice trick is to remove the LightMode tag. I hope it helps.
Hi, I am also plagued by the same problem. I want to render terrain depth into a single RenderTexture using “DepthOnlyPass”.
Currently I only found one way to successfully write Mesh to RenderTexture:
It won’t work if I don’t use callbacks.
Did you find a solution?
Could you please explain how to render to a render texture from an additional camera? In BiRP I used the camera.Render method on a disabled camera to render only once, when needed, but it looks like that doesn’t work in URP.
I think I just got it working. I created a new Universal Renderer asset, added it to the URP asset’s Renderer List, and then selected it for the camera that I need to render into the RT. After all that it works as it did in BiRP: just call myCamera.Render().
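If you prefer to select the renderer from script, a hedged sketch (the index 1 assumes the new renderer is the second entry in the URP asset’s Renderer List; myCamera and rt are placeholders):

```csharp
using UnityEngine.Rendering.Universal;

// Point the camera at the dedicated renderer, set the target, render once.
var cameraData = myCamera.GetUniversalAdditionalCameraData();
cameraData.SetRenderer(1); // index into the URP asset's Renderer List
myCamera.targetTexture = rt;
myCamera.Render();
```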
I tried to do it using this method, but didn’t understand how it works. In the example from the thread it looks like I have to subscribe to RenderPipelineManager.beginCameraRendering and then call UniversalRenderPipeline.RenderSingleCamera from there, but it executes every frame, not only once as I needed.
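One way to make that callback run only once is to unsubscribe inside it; a minimal sketch (note that RenderSingleCamera is marked obsolete in recent URP versions, where camera.Render() or render requests replace it):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class OneShotRender : MonoBehaviour
{
    [SerializeField] private Camera _previewCamera;

    // Call this whenever you want exactly one render of _previewCamera.
    public void RenderOnce()
    {
        RenderPipelineManager.beginContextRendering += Render;
    }

    private void Render(ScriptableRenderContext context, List<Camera> cameras)
    {
        // Unsubscribe first so this fires for a single frame only.
        RenderPipelineManager.beginContextRendering -= Render;
        UniversalRenderPipeline.RenderSingleCamera(context, _previewCamera);
    }
}
```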