Regression - bad HD Scene Color when used in a custom pass at the Before Post Process injection point.

Hi @chap-unity ,

I’m pinging you because you previously responded to my showcase post about how to render multiple transparent layers that both use fake distortion.

Showcase - How do draw multiple layers of transparent surfaces that include refraction - Unity Forum

As a refresher, it relies on rendering our glass layer in a custom pass at Before Post Process, after the second color pyramid (used for distortion) has been constructed.

However, sometime between Unity 2022.3.x (I can’t remember the revision) and Unity 2022.3.30, this approach broke down: using HD Scene Color in a Shader Graph in a custom pass at Before Post Process gives bad values.

I thought it might be related to upscaling, but this does not appear to be the case. I get bad values whether DLSS is off or on, regardless of the scaling factor.

Is there a known regression, or a known work-around? We are totally stuck without the ability to do this, because we have a lot of glass over water.

Update - some further investigation reveals that using HD Scene Color on the second color pyramid (the distortion pyramid) yields bad results because it addresses the entire image, even though the actual color pyramid is only rendered into the bottom part of the oddly sized buffer (2560x2048).

More explained in this image:

Just to be sure I understand correctly: you are sampling the distortion color pyramid “by yourself”, and even though it looks fine, you are actually sampling the black part of it as well and you don’t want to (in 2022.3.30). And this wasn’t the case before this version, right?

I’ve talked quickly with some people and their guess is that the color pyramid is “fine” since we haven’t touched it in a while, especially since we do fewer and fewer backports to 2022.3, but maybe the RTHandleScale has changed and does not contain the proper scale, leading you to sample black outside the buffer?

Correct.


Brief recap of what/why we are doing:
We don’t use the “built-in” distortion, because it is a post effect that blurs the final image. That means if you have glass with detailed normal mapping on the surface that gives nice specular highlights, or fine dust on the surface… distortion will blur the whole thing, including the surface detail (not just the background).

So instead we have a simple custom pass that renders the glass as a regular lit surface, while showing the blurred background that we read from HD Scene Color (pre-exposure color output through the emissive channel with a simple brightness multiplier). It’s really just a pretty simple Shader Graph.

We inject the custom pass at Before Post Process, because by then the distortion color pyramid has already been created for us, and it includes our transparent water, which was rendered earlier in the regular transparent pass.

So this gives us distorted glass over distorted water, with no surface detail being lost to blurring :slight_smile:
I hope this gives a good overall explanation.
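
In code terms, the setup is roughly this - a minimal sketch only; the class name, the material field and the layer name are placeholders, and the pass could just as well be configured on a CustomPassVolume in the editor:

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

public class GlassDistortionPassSetup : MonoBehaviour
{
    public Material glassDistortionMaterial; // Shader Graph that samples HD Scene Color
    public LayerMask glassLayer;             // layer containing the glass renderers

    void Start()
    {
        var volume = gameObject.AddComponent<CustomPassVolume>();
        volume.isGlobal = true;
        // Inject after transparents, once the distortion color pyramid exists.
        volume.injectionPoint = CustomPassInjectionPoint.BeforePostProcess;

        volume.customPasses.Add(new DrawRenderersCustomPass
        {
            layerMask = glassLayer,
            overrideMaterial = glassDistortionMaterial,
        });
    }
}
```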

As for the second part of the question - yes, about a month ago, using the HD Scene Color node would read the correct valid area of the distortion color pyramid, even if the RT itself is larger and the color pyramid is rendered into a subsection of it. The only part I’m not sure of is whether the regression is due to the Unity version change, or due to some random thing changed on my side that caused… something… to go wrong.

I don’t know how to control or fix the RTHandleScale if it is indeed wrong.

Anything that causes the RT to be allocated larger than the “used” area will cause this issue. The most common case for this is when using some kind of upscaling. A simple test putting the upscale factor at 0.5 will generate a distortion color pyramid that is rendered into the bottom-left quadrant of the RT. The HD Scene Color node then reads the entire image corner to corner, including the 75% of black pixels as well as the 25% that is the actual valid color pyramid.
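
To illustrate what I mean by the scale: a minimal sketch (the _MyRTHandleScale property is a hypothetical name, and this assumes the default RTHandle system reflects the camera’s current scale) of pushing the current RTHandle scale to shaders, so a graph could multiply its full-screen UV by it before sampling:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class PushRTHandleScale : MonoBehaviour
{
    static readonly int k_ScaleId = Shader.PropertyToID("_MyRTHandleScale"); // hypothetical property

    void LateUpdate()
    {
        // rtHandleScale.xy is the ratio of the currently rendered viewport
        // to the allocated RT size (e.g. 0.5, 0.5 at a 2.0 upscale factor).
        Vector4 scale = RTHandles.rtHandleProperties.rtHandleScale;
        Shader.SetGlobalVector(k_ScaleId, scale);
    }
}
```

If the scale were correct, multiplying the screen UV by scale.xy would keep the sample inside the valid bottom-left region of the buffer instead of reading the black part.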

[Attached image: hdscenecolor-in-custom-pass.jpg]

Update:

I can work around the issue by delaying the upscaling for a short while. In this case, DLSS is initialized, but I keep the scale factor at 1.0 until at least a few frames in. Then I switch the scaling to 0.5 (factor 2.0) and, just like magic, everything works and the custom pass / HD Scene Color node is no longer confused about which region of the color pyramid RT to read from.
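
For anyone else hitting this, the work-around boils down to something like the following - a minimal sketch; the frame count and the 50% target are placeholders, and it assumes dynamic resolution is enabled on the HDRP asset:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class DelayedUpscaleFactor : MonoBehaviour
{
    const int k_WarmupFrames = 5;      // how many frames to stay at native scale
    const float k_TargetPercent = 50f; // 50% == upscale factor 2.0

    float m_CurrentPercent = 100f;

    void Start()
    {
        // HDRP polls this delegate every frame to get the resolution scale.
        DynamicResolutionHandler.SetDynamicResScaler(
            () => m_CurrentPercent,
            DynamicResScalePolicyType.ReturnsPercentage);
    }

    void Update()
    {
        // Only drop to the intended scale after a few frames have rendered.
        if (Time.frameCount > k_WarmupFrames)
            m_CurrentPercent = k_TargetPercent;
    }
}
```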

So I suspect there is some fragile logic going on in HDRP in relation to the upscaling factor and RTHandles system. I mean… the frame in which I start applying an upscale factor should not be able to break shader graphs that use the HDSceneColor node.

Update 2:
Furthermore - changing the upscale factor again while playing, to a greater factor than the one initially set, will immediately cause the problem to come back.

For example… start with factor 1.0, then after a few frames set to factor 2.0 (my work-around as described above). We now render in half-res, and everything works. We can now dynamically switch to factors such as 1.0, 1.5, 1.7 and 2.0. But if we switch to let’s say factor 3.0, then the HDSceneColor node fails again in the shader graph, reading from invalid areas of the RT.

My next idea is to grab the scene color myself in a custom pass and build my own blurred mips, instead of relying on the generated color pyramids + HDSceneColor.
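
Something along these lines - a minimal sketch; SceneColorMipsPass and _BlurredSceneColor are placeholder names, and GenerateMips only gives a cheap box-filtered mip chain rather than the nicer Gaussian pyramid HDRP builds:

```csharp
using UnityEngine;
using UnityEngine.Experimental.Rendering;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

class SceneColorMipsPass : CustomPass
{
    RTHandle m_ColorCopy;

    protected override void Setup(ScriptableRenderContext renderContext, CommandBuffer cmd)
    {
        // Screen-sized RT with a mip chain; trilinear so the mips blend smoothly.
        m_ColorCopy = RTHandles.Alloc(
            Vector2.one,
            colorFormat: GraphicsFormat.B10G11R11_UFloatPack32,
            filterMode: FilterMode.Trilinear,
            useMipMap: true, autoGenerateMips: false,
            name: "SceneColorMips");
    }

    protected override void Execute(CustomPassContext ctx)
    {
        // Copy the current camera color (which already contains the water),
        // then build the mip chain to use as the "blurred background".
        CustomPassUtils.Copy(ctx, ctx.cameraColorBuffer, m_ColorCopy);
        ctx.cmd.GenerateMips(m_ColorCopy);
        ctx.cmd.SetGlobalTexture("_BlurredSceneColor", m_ColorCopy); // hypothetical shader property
    }

    protected override void Cleanup()
    {
        m_ColorCopy?.Release();
    }
}
```

The glass Shader Graph would then sample _BlurredSceneColor at a chosen LOD instead of using the HD Scene Color node.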

An update on this one @chap-unity :

(please also see my findings above in case you haven’t)

To summarise briefly based on all my latest investigations:

  • The core issue is that using the HD Scene Color node (Shader Graph) in a custom pass at the Before Post Process injection point is not reliable (bad coordinates).
  • It doesn’t much matter what type of custom pass it is. In this case, it is a simple DrawRenderersCustomPass that renders objects with a certain material and the above-mentioned shader that uses HD Scene Color.
  • When doing the above at any other injection point, HD Scene Color can be used reliably, so the issue relates specifically to the Before Post Process injection point.

There are two circumstances that will trigger the bad read from HD Scene Color:

  1. Initialising DLSS or another upscaler with an upscaling value other than 1.0 in the first frame (let’s say in some Awake() method). You have to delay changing the upscaling factor.
  2. Using a play mode resolution that doesn’t match your display’s native resolution. For example, I have a 4K display, but when I enter play mode at QHD (2560x1440), HD Scene Color gives a bad result.

What is meant by “bad result” is that the color pyramid buffer will be larger than the valid area into which the color pyramid is rendered, but HD Scene Color will read from the entire buffer, which includes the unwanted areas at the top-right.

I hope this helps and that somebody from Unity can help to fix this - as it is a new problem that started sometime during the 2022.3.x LTS phase.

Hey, thanks again. It took me some time, but I finally managed to spend a few minutes discussing this with the right devs. It appears that something is indeed broken, and I managed to create a simple repro in a fresh project.

The only thing that does not make sense to us is how it was working before (as you mentioned in your post, somewhere in 2022.3.X), since very few things have been backported recently, and even fewer in that area of the code… so if you have the exact version, so that we can confirm it’s a regression, that could also help.

I’ve logged something here, so that you can follow the process in case I forget to update this thread about it. I can’t give you any estimate since our team is running super low on engineers, but we’ll do our best :).

Have a nice day.


It could be that changes on my side put us into or out of some lucky edge case where it works. For example, I know that right now it works when I run in 4K, but when I switch to QHD it doesn’t. So maybe for a while I was running 4K and didn’t notice any issue. As for the weird work-around of delaying the upscaling factor for a few frames, I have no idea why I didn’t have to do this before, but now I have to.

Also, yesterday I went into the same scene and ran in QHD, and everything worked. Later on, it didn’t.

So the inner details of the conditions under which it works or breaks down are beyond the level of detail I can understand or analyse :face_with_spiral_eyes:


FYI @chap-unity, I am also seeing this issue. I have one Shader Graph (a full-screen post-process) that works perfectly in one project. I ported it over to a new project, ensured that all the HDRP graphics settings matched (on Unity 6 Preview), and it simply refuses to return anything but (0,0,0). Even a simple “HD Scene Color” node in a blank Shader Graph returns black.

The buffer itself looks fine/normal in the Rendering Debugger, but I get a completely black output no matter what.

This is a fairly big issue for anyone making PostFX in HDRP.

This is what Render Graph looks like on the broken project:

Weirdly, on the “working” project, Render Graph doesn’t show my PostFX as reading any color buffers, when it clearly does…

Good morning, I wonder if there is an update or rough ETA on this one? Unity Issue Tracker - [HDRP] Wrong scene-color sampling when injecting a custom pass on Before PostProcess (unity3d.com)

I’m sorry, no update or ETA, it’s not forgotten, but our team is spread too thin, so we have to prioritize.
