Alternatively, the objects could be rendered in two passes: once with front faces and no depth offset, and once with back faces and some offset. But I assume this isn't possible with Shader Graph.
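For reference, a hand-written version of that two-pass idea could look roughly like the sketch below (Built-in pipeline style ShaderLab; the shader name, the property names, and the use of the fixed-function Offset state for the back-face bias are all assumptions, not a confirmed solution):

```
Shader "Examples/TwoPassFaceOffset"
{
    Properties
    {
        _Color ("Color", Color) = (1, 1, 1, 1)
        // Negative values pull the back faces toward the camera, positive push them away.
        _BackFaceOffset ("Back Face Depth Offset", Float) = -1
    }
    SubShader
    {
        Tags { "RenderType" = "Opaque" }

        CGINCLUDE
        #include "UnityCG.cginc"

        fixed4 _Color;

        struct appdata { float4 vertex : POSITION; };
        struct v2f     { float4 pos : SV_POSITION; };

        v2f vert (appdata v)
        {
            v2f o;
            o.pos = UnityObjectToClipPos(v.vertex);
            return o;
        }

        fixed4 frag (v2f i) : SV_Target
        {
            return _Color; // placeholder unlit color
        }
        ENDCG

        // Pass 1: front faces, no offset.
        Pass
        {
            Cull Back
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            ENDCG
        }

        // Pass 2: back faces, biased in depth with the fixed-function Offset state.
        Pass
        {
            Cull Front
            Offset [_BackFaceOffset], [_BackFaceOffset]
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            ENDCG
        }
    }
}
```

URP handles multi-pass shaders differently from the Built-in pipeline, so there you'd more likely split the two passes across two materials or re-render the objects with a renderer feature, as discussed further down the thread.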
Hi, it’s impossible to connect the “Is Front Face” node to the Position output in both the old and the current Shader Graph, because the Position output is calculated in the vertex shader, while Is Front Face is only available in the fragment stage.
I think what you need is the Pixel Depth Offset in the HDRP Shader Graph fragment stage.
In HDRP, you’d use PixelDepthOffset to bring a pixel closer to or further from the camera. This is functionally equivalent to being able to output fragment depth (in addition to color) from a regular fragment shader.
However, as far as I’m aware this isn’t supported in either the Built-in pipeline or URP (for no good reason I can think of, since you can still do this just fine by hand-writing the shader in both pipelines, just not when using Shader Graph).
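To illustrate what hand-writing it looks like: a fragment function can declare an SV_Depth output next to its color output and write whatever depth it wants. A minimal sketch (the _Color and _DepthOffset properties, and the naive way the offset is added to the incoming depth, are assumptions for illustration only):

```
// Assumed material properties, declared alongside the rest of a hand-written unlit shader.
float4 _Color;
float  _DepthOffset;

struct v2f
{
    float4 pos : SV_POSITION;   // in the fragment stage, pos.z holds the fragment's depth-buffer value
};

struct FragOutput
{
    float4 color : SV_Target;
    float  depth : SV_Depth;    // overrides the depth the rasterizer would have written
};

FragOutput frag (v2f i)
{
    FragOutput o;
    o.color = _Color;
    // Nudge the stored depth to move the fragment closer to / further from the camera.
    o.depth = i.pos.z + _DepthOffset;
    return o;
}
```

The usual caveat applies: writing to SV_Depth disables early depth testing for that draw, so it's worth limiting it to the objects that actually need it.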
Thank you for the answers. (Sorry for the late reply, I only just got the notification email.)
And I’m sorry for the sloppy wording; I did indeed want to change only the depth, not the whole position.
I ended up rendering the relevant objects a second time with the RenderObjects renderer feature, using a hand-written shader with a hard-coded depth offset. I had to use the same material for all the back faces, which turned out to be fine.
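The shader was something along these lines (the name, the bias values, and the flat color below are placeholders rather than the exact version I used):

```
Shader "Examples/BackFaceDepthOffset"
{
    SubShader
    {
        Tags { "RenderType" = "Opaque" "RenderPipeline" = "UniversalPipeline" }

        Pass
        {
            Cull Front        // draw back faces only
            Offset -1, -1     // hard-coded depth bias, tweak to taste

            HLSLPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

            struct Attributes { float4 positionOS : POSITION; };
            struct Varyings   { float4 positionCS : SV_POSITION; };

            Varyings vert (Attributes IN)
            {
                Varyings OUT;
                OUT.positionCS = TransformObjectToHClip(IN.positionOS.xyz);
                return OUT;
            }

            half4 frag (Varyings IN) : SV_Target
            {
                return half4(0, 0, 0, 1);   // single flat color shared by all back faces
            }
            ENDHLSL
        }
    }
}
```

A material using this shader then goes into the Override Material slot of the RenderObjects feature, which re-draws the same layer a second time.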
You can also try requesting a “Pixel Depth Offset” feature for URP Shader Graph in the 2023 Dev Blitz Day forum. At least you’ll get a response from the devs there, unlike submitting ideas on the roadmap.