So what’s the story? I see references to DrawRendererSettings.SetOverrideMaterial on Google, but no sign of it actually existing in the current code. I need to draw some stuff with a special shader from a custom camera; can this be done in both SRPs in some common way, or do you have to modify the SRP to do it?
In early SRP implementations (the first public versions) it was fairly easy to hack this in on your own, but forking URP/HDRP is a pretty bad idea (plus the complexity has risen drastically), and I’m really not sure how to do it with vanilla URP/HDRP.
Would like to know too.
Having to provide a custom version of an SRP is a non-starter. So many techniques require rendering custom buffers from alternative camera views; I’m not sure how this is viable otherwise.
Well I found this:
Which claims it has a custom depth pass example, but damned if I can find it in the repository…
Ah, of course, it’s buried in a “tests” folder. How obvious.
So it looks like I have to create one of these custom render passes that injects itself into the pipeline, and the user has to assign it somewhere, set up the camera, etc.? Sounds like 100 points of failure for users…
And it looks like you can’t use a custom shader, only passes in existing shaders. Ugh.
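For reference, URP’s injection point is a ScriptableRendererFeature rather than HDRP’s Custom Pass. Here is a minimal sketch of an override-material pass, not taken from this thread; the class names, render pass event, and the “UniversalForward” tag are my assumptions:

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Hypothetical example: redraws everything on a layer with an override material.
public class OverrideMaterialFeature : ScriptableRendererFeature
{
    public Material overrideMaterial;   // the "special shader"
    public LayerMask layerMask = ~0;

    class OverrideMaterialPass : ScriptableRenderPass
    {
        Material material;
        int layerMask;
        // Renderers are matched by the LightMode tag of their existing shader,
        // then drawn with the override material instead of their own.
        static readonly ShaderTagId tag = new ShaderTagId("UniversalForward");

        public OverrideMaterialPass(Material material, int layerMask)
        {
            this.material = material;
            this.layerMask = layerMask;
            renderPassEvent = RenderPassEvent.AfterRenderingOpaques;
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            var drawing = CreateDrawingSettings(tag, ref renderingData, SortingCriteria.CommonOpaque);
            drawing.overrideMaterial = material;
            var filtering = new FilteringSettings(RenderQueueRange.all, layerMask);
            context.DrawRenderers(renderingData.cullResults, ref drawing, ref filtering);
        }
    }

    OverrideMaterialPass pass;

    public override void Create()
    {
        pass = new OverrideMaterialPass(overrideMaterial, layerMask);
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        if (overrideMaterial != null)
            renderer.EnqueuePass(pass);
    }
}

Note this still draws with the main camera’s culling and view; rendering from a secondary camera means doing the culling and matrix setup yourself, which is exactly where the HDRP code further down runs into trouble.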
This is probably a better place to read about custom passes: the Custom Pass documentation in com.unity.render-pipelines.high-definition
(don’t you love that the docs are now all over the place and with different systems?)
AFAIK custom pass is more intended as a way to insert your own post effect into HDRP’s post effect stack (or get a buffer at a specific point in rendering and do stuff to it) and less intended for replacement shaders.
But maybe it can be done with DrawingSettings.overrideMaterial in a custom pass. (read the section “Calling DrawRenderers inC#” (sic) in the link)
Also, if you want a universal URP/HDRP solution, this won’t do, since URP doesn’t have equivalent custom pass functionality.
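For concreteness, the raw SRP-core form of an override-material draw looks roughly like this inside a pass. A sketch only; the method name and parameters are placeholders, and the “Forward” tag assumes HDRP-style shaders:

using UnityEngine;
using UnityEngine.Rendering;

// Sketch: draw all renderers on `layerMask` with `overrideMat`, using the
// SRP-core DrawingSettings API. `context` and `cull` come from the pipeline.
void DrawWithOverride(ScriptableRenderContext context, CullingResults cull,
                      Camera camera, Material overrideMat, LayerMask layerMask)
{
    var sorting = new SortingSettings(camera) { criteria = SortingCriteria.CommonOpaque };
    var drawing = new DrawingSettings(new ShaderTagId("Forward"), sorting)
    {
        overrideMaterial = overrideMat,   // replaces each matched renderer's material
        overrideMaterialPassIndex = 0,    // which pass of the override material to use
    };
    var filtering = new FilteringSettings(RenderQueueRange.all, layerMask);
    context.DrawRenderers(cull, ref drawing, ref filtering);
}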
So the problem I’m having with Custom Pass is that once I set it up, it no longer renders my main camera properly; I can see my stuff getting drawn into my buffer in the frame debugger, but then the main camera seems to draw from the position of the secondary camera.
And of course, this doesn’t work for URP, which is just a sign that these two teams have no shared plan for what they are doing.
It’s also nice how the docs tell you absolutely nothing about what anything is: just autogenerated script references that show you what overloads the API has. No explanations, nothing about what a function is for or what it does. And this is “Production Ready”?
@jbooth_1 did you read this reply to a recent thread:
But it looks like there’s no easy replacement-shader-style way of doing things…
Yeah, for HDRP I’m down the rabbit hole of a custom pass, which should be able to do what I want, but whenever it’s active the main camera doesn’t render from its position anymore, but rather from the position of the custom camera.
protected override void Execute(ScriptableRenderContext renderContext, CommandBuffer cmd, HDCamera camera, CullingResults cullingResult)
{
    // if (!render || camera.camera == bakingCamera)
    //     return;
    if (TraxManager.instance == null || TraxManager.instance.cam == null)
        return;

    // Cull the scene from the secondary (baking) camera rather than the HD camera.
    Camera bakingCam = TraxManager.instance.cam;
    bakingCam.TryGetCullingParameters(out var cullingParams);
    cullingParams.cullingOptions = CullingOptions.None;
    cullingResult = renderContext.Cull(ref cullingParams);

    // Renderer list that redraws everything on the layer mask with the replacement material.
    var result = new RendererListDesc(shaderTags, cullingResult, bakingCam)
    {
        rendererConfiguration = PerObjectData.None,
        renderQueueRange = RenderQueueRange.all,
        sortingCriteria = SortingCriteria.BackToFront,
        excludeObjectMotionVectors = false,
        layerMask = TraxManager.instance.layerMask,
        overrideMaterial = TraxManager.instance.replacementMat,
    };

    // View/projection for the baking camera; the Z flip matches Unity's
    // worldToCameraMatrix convention (cameras look down -Z).
    var p = GL.GetGPUProjectionMatrix(bakingCam.projectionMatrix, true);
    Matrix4x4 scaleMatrix = Matrix4x4.identity;
    scaleMatrix.m22 = -1.0f;
    var v = scaleMatrix * bakingCam.transform.localToWorldMatrix.inverse;
    var vp = p * v;

    // Overwrite HDRP's global camera matrices with the baking camera's.
    // Nothing restores these afterwards, so later passes for the main
    // camera may inherit them (see the note below).
    cmd.SetGlobalMatrix("_ViewMatrix", v);
    cmd.SetGlobalMatrix("_InvViewMatrix", v.inverse);
    cmd.SetGlobalMatrix("_ProjMatrix", p);
    cmd.SetGlobalMatrix("_InvProjMatrix", p.inverse);
    cmd.SetGlobalMatrix("_ViewProjMatrix", vp);
    cmd.SetGlobalMatrix("_InvViewProjMatrix", vp.inverse);
    cmd.SetGlobalMatrix("_CameraViewProjMatrix", vp);
    // HDRP shades camera-relative, hence the zeroed camera position.
    cmd.SetGlobalVector("_WorldSpaceCameraPos", Vector3.zero);

    // Draw the list into the bake target.
    CoreUtils.SetRenderTarget(cmd, TraxManager.instance.depthRT, ClearFlag.Color);
    HDUtils.DrawRendererList(renderContext, cmd, RendererList.Create(result));
}
Essentially I can watch this draw into my buffers in the frame debugger, but then the main camera is messed up. If I try calling CoreUtils.SetRenderTarget with null afterwards, it just ends up drawing black for the main camera. And GetRenderTargetAutoName just returns some string, so I have no idea how to restore things after I’m done rendering, and the docs don’t say what any of this stuff does.
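If the lack of restoration is indeed the problem, one thing to try is re-pushing the main camera’s matrices at the end of Execute. A sketch only, assuming the same global names as above; it ignores HDRP’s camera-relative conventions (which may want the view translation removed and _WorldSpaceCameraPos kept at zero), so verify against your HDRP version:

// Hypothetical helper, not HDRP API: re-push the main camera's matrices after
// the custom draw so later passes don't inherit the baking camera's transforms.
void RestoreCameraGlobals(CommandBuffer cmd, HDCamera hdCamera)
{
    Camera cam = hdCamera.camera;
    var proj = GL.GetGPUProjectionMatrix(cam.projectionMatrix, true);
    var view = cam.worldToCameraMatrix;
    var viewProj = proj * view;
    cmd.SetGlobalMatrix("_ViewMatrix", view);
    cmd.SetGlobalMatrix("_InvViewMatrix", view.inverse);
    cmd.SetGlobalMatrix("_ProjMatrix", proj);
    cmd.SetGlobalMatrix("_InvProjMatrix", proj.inverse);
    cmd.SetGlobalMatrix("_ViewProjMatrix", viewProj);
    cmd.SetGlobalMatrix("_InvViewProjMatrix", viewProj.inverse);
    cmd.SetGlobalMatrix("_CameraViewProjMatrix", viewProj);
    // Assumption: plain world-space camera position; HDRP's camera-relative
    // path may expect Vector3.zero here instead.
    cmd.SetGlobalVector("_WorldSpaceCameraPos", cam.transform.position);
}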