Regression - bad HD Scene Color when used in custom pass Before Post Process injection point

Hi @chap-unity ,

I’m pinging you because you previously responded to my showcase post about how to render multiple transparent layers that both use fake distortion.

Showcase - How do draw multiple layers of transparent surfaces that include refraction - Unity Forum

As a refresher, the technique relies on rendering our glass layer in a custom pass at the Before Post Process injection point, after the second color pyramid (the one built for distortion) has been constructed.

However, sometime between Unity 2022.3.x (I can’t remember the exact revision) and Unity 2022.3.30, this approach has broken: using HD Scene Color in a Shader Graph inside a Before Post Process custom pass now gives bad values.

I thought it might be related to upscaling, but that does not appear to be the case. I get bad values with DLSS on or off, regardless of the scaling factor.

Is there a known regression, or a known work-around? We are totally stuck without the ability to do this, because we have a lot of glass over water.

Update - some further investigation reveals that using HD Scene Color on the second color pyramid (the distortion pyramid) yields bad results because it samples the entire buffer, even though the actual color pyramid is only rendered into the bottom part of the oddly sized buffer (2560x2048).

More detail in this image:

Just to be sure I understand correctly: you are sampling the distortion color pyramid “by yourself”, and even though it looks fine, you are actually sampling the black part of it as well, which you don’t want (in 2022.3.30). And this wasn’t the case before this version, right?

I’ve talked quickly with some people and their guess is that the color pyramid itself is “fine”, since we haven’t touched it in a while, especially as we do fewer and fewer backports to 2022.3. But maybe the RTHandleScale has changed and no longer contains the proper scale, leading you to sample black outside the valid region of the buffer?

Correct.


Brief recap of what/why we are doing:
We don’t use the “built-in” distortion, because it is a post effect that blurs the final image. That means if you have glass with detailed normal mapping on the surface that gives nice specular highlights, or fine dust on the surface… distortion will blur the whole thing, including the surface detail (not just the background).

So instead we have a simple custom pass that renders the glass as a regular lit surface, while showing the blurred background that we read from HD Scene Color (pre-exposure color routed out through the emissive channel with a simple brightness multiplier). It’s really just a pretty simple shader graph.

We inject the custom pass at Before Post Process, because by then the distortion color pyramid has already been created for us, and it includes our transparent water, which was rendered earlier in the regular transparent pass.

So this gives us distorted glass, over distorted water, while no surface detail is being lost to blurring :slight_smile:
I hope this gives a good overall explanation.
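
For anyone wanting to reproduce the setup from script, the pass itself is trivial. Here is a rough scripted equivalent (just a sketch - in our project it is literally the built-in DrawRenderersCustomPass configured on a CustomPassVolume whose injection point is set to Before Post Process, and the class/layer names below are placeholders):

using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

// Sketch of a scripted equivalent of our DrawRenderers custom pass.
// The injection point is chosen on the CustomPassVolume hosting the pass
// (Before Post Process, so the distortion color pyramid already exists).
class GlassOverDistortionPass : CustomPass
{
    // Layer holding the glass renderers (placeholder name).
    public LayerMask glassLayer;

    protected override void Execute(CustomPassContext ctx)
    {
        // Draws the glass with its regular lit material; that material's
        // Shader Graph reads the blurred background via the HD Scene Color node.
        CustomPassUtils.DrawRenderers(ctx, glassLayer);
    }
}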

As for the second part of the question - yes, about a month ago, using the HD Scene Color node would read the correct valid area of the distortion color pyramid, even if the RT itself is larger and the color pyramid is rendered into a subsection of it. The only part I’m not sure of is whether the regression is due to the Unity version change, or due to some random change on my side that caused… something… to go wrong.

I don’t know how to control or fix the RTHandleScale if it is indeed wrong.

Anything that causes the RT to be allocated bigger than the “used” area will cause this issue. The most common case is some kind of upscaling. A simple test with the upscale factor at 0.5 generates a distortion color pyramid that is rendered into only the bottom-left quadrant of the RT. The HD Scene Color node then reads the entire RT corner to corner, including the 75% that is black and the 25% that holds the actual valid color pyramid.

Update:

I can work around the issue by delaying the upscaling for a short while. In this case, DLSS is initialized but I keep the scale factor at 1.0 until at least a few frames in. Then I switch the scaling to 0.5 (factor 2.0) and, just like magic, everything works and the custom pass / HD Scene Color node is no longer confused about which region of the color pyramid RT to read from.

So I suspect there is some fragile logic going on in HDRP in relation to the upscaling factor and RTHandles system. I mean… the frame in which I start applying an upscale factor should not be able to break shader graphs that use the HDSceneColor node.
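
For reference, the delay itself is nothing clever. A minimal sketch of the workaround, assuming the scale is driven through the standard dynamic resolution scaler callback (rather than DLSS automatic/optimal settings) and that dynamic resolution is already enabled on the HDRP asset and the camera - the class and field names are just examples:

using UnityEngine;
using UnityEngine.Rendering;

// Keep the dynamic resolution scale at 100% for the first few frames,
// then drop to the target scale (50% = upscale factor 2.0).
public class DelayedUpscaleFactor : MonoBehaviour
{
    [SerializeField] float targetPercentage = 50f;
    [SerializeField] int delayFrames = 5;

    int framesSinceStart;

    void OnEnable()
    {
        // ReturnsPercentage: the callback returns the screen percentage directly.
        DynamicResolutionHandler.SetDynamicResScaler(GetScale,
            DynamicResScalePolicyType.ReturnsPercentage);
    }

    void Update() => framesSinceStart++;

    float GetScale()
    {
        return framesSinceStart < delayFrames ? 100f : targetPercentage;
    }
}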

Update 2:
Furthermore - changing the upscale factor again while playing, to a greater factor than the one initially set, immediately causes the problem to come back.

For example… start with factor 1.0, then after a few frames set to factor 2.0 (my work-around as described above). We now render in half-res, and everything works. We can now dynamically switch to factors such as 1.0, 1.5, 1.7 and 2.0. But if we switch to let’s say factor 3.0, then the HDSceneColor node fails again in the shader graph, reading from invalid areas of the RT.

My next idea is to grab the scene color myself in a custom pass and build my own blurred mips, instead of relying on the generated color pyramids + HDSceneColor.
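
If I end up going that route, it would look roughly like this (untested sketch; the RT and global texture names are made up, and the shader that samples _MyBlurredSceneColor would still have to account for the RTHandle scale itself):

using UnityEngine;
using UnityEngine.Experimental.Rendering;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

// Sketch: copy the camera color into our own mip-mapped RT and sample that
// from the glass shader instead of relying on the HDSceneColor node.
class ManualScenePyramidPass : CustomPass
{
    RTHandle blurredColor;

    protected override void Setup(ScriptableRenderContext renderContext, CommandBuffer cmd)
    {
        // Scaled allocation so the RT follows the camera size / dynamic resolution.
        blurredColor = RTHandles.Alloc(Vector2.one, TextureXR.slices,
            dimension: TextureXR.dimension,
            colorFormat: GraphicsFormat.B10G11R11_UFloatPack32,
            useMipMap: true, autoGenerateMips: false,
            name: "ManualSceneColorPyramid");
    }

    protected override void Execute(CustomPassContext ctx)
    {
        // Grab the current scene color, then build box-filtered mips as cheap blur levels
        // (blockier than HDRP's gaussian color pyramid, but fully under our control).
        HDUtils.BlitCameraTexture(ctx.cmd, ctx.cameraColorBuffer, blurredColor);
        ctx.cmd.GenerateMips(blurredColor.rt);
        ctx.cmd.SetGlobalTexture("_MyBlurredSceneColor", blurredColor);
    }

    protected override void Cleanup()
    {
        RTHandles.Release(blurredColor);
    }
}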

An update on this one @chap-unity :

(please also see my findings above in case you haven’t)

To summarise briefly based on all my latest investigations:

  • The core issue is that using the HD Scene Color node (shader graph) in a custom pass with the Before Post Process injection point is not reliable (bad coordinates).
  • It doesn’t much matter what type of custom pass it is. In this case, it is a simple DrawRenderersCustomPass that renders objects with a certain material and the above-mentioned shader that uses HD Scene Color.
  • When doing the same thing at any other injection point, HD Scene Color can be used reliably, so the issue relates specifically to the Before Post Process injection point.

There are two circumstances that will trigger the bad read from HD Scene Color:

  1. Initialising DLSS or other upscaling with an upscaling value other than 1.0 in the first frame (say, in some Awake() method). You have to delay changing the upscaling factor.
  2. Using a play mode resolution that doesn’t match your display native resolution. For example I have a 4K display, but when I enter play mode in QHD 2560x1440, HD Scene Color will give a bad result.

What I mean by “bad result” is that the color pyramid buffer is larger than the valid area into which the color pyramid is rendered, but HD Scene Color reads from the entire buffer, which includes the unwanted (never-written) areas toward the top-right.

I hope this helps and that somebody from Unity can help to fix this - as it is a new problem that started sometime during the 2022.3.x LTS phase.

Hey, thanks again. It took me some time, but I finally managed to spend a few minutes discussing this with the right devs. It appears that something is indeed broken, and I managed to create a simple repro in a fresh project.

The only thing that does not make sense to us is how it was working before (as you mentioned in your post, somewhere in 2022.3.X), since very few things have been backported recently and even fewer in that area of the code… so if you have the exact version, so that we can confirm it’s a regression, that would also help.

I’ve logged something here, so that you can follow the process in case I forget to update this thread. I can’t give you any estimate since our team is running super low on engineers, but we’ll do our best :).

Have a nice day.

It could be things changing on my side that put us into or out of some lucky edge case where it works. For example, I know right now it works when I run in 4K, but when I switch to QHD it doesn’t. So maybe for a while I was running 4K and didn’t notice any issue. As for the weird work-around where I have to delay the upscaling factor for a few frames, I have no idea why I didn’t have to do this before, but now I do.

Also yesterday, I went into the same scene and ran QHD, and everything worked. Later on, it didn’t.

So the inner details of conditions where it works or breaks down, that’s beyond the level of detail I can understand or analyse :face_with_spiral_eyes:

FYI @chap-unity I am also seeing this issue - I have one Shader Graph (full-screen post-process) that works perfectly in one project. I ported it over to a new project, ensured that all the HDRP graphics settings matched (on Unity 6 preview), and it simply refuses to return anything but (0,0,0). Even a simple “HD Scene Color” node in a blank Shader Graph returns black.

The buffer itself in Rendering Debugger looks fine/normal, but I get a completely black output no matter what.

This is a fairly big issue for anyone making PostFX in HDRP.

This is what Render Graph looks like on the broken project:

Weirdly, on the “working” project, Render Graph doesn’t show my PostFX as reading any color buffers, when it clearly is…

Good morning, I wonder if there is an update or rough ETA on this one? Unity Issue Tracker - [HDRP] Wrong scene-color sampling when injecting a custom pass on Before PostProcess (unity3d.com)

I’m sorry, no update or ETA, it’s not forgotten, but our team is spread too thin, so we have to prioritize.

Hi @chap-unity It’s me again.

Just following up on this one once more.

I’m taking a fair bit of heat to get this resolved, since none of our glass windows, raindrop effects or anything else that relies on HD Scene Color will work correctly alongside DLSS.

If we were not on an LTS release, I would shut up and move on :slight_smile:
But, since 2022.3.x is LTS, we absolutely should be able to rely on a backported bug fix.

Is there any way this one could be looked at soon?

Unity Issue Tracker - [HDRP] Wrong scene-color sampling when injecting a custom pass on Before PostProcess

Thanks in advance

It’s already fixed; the PR has hit a few bumps along the road, but I’ll continue pushing for it to land in 22.3
:crossed_fingers:

If it’s really time sensitive on your side, I could send you the fix and you could apply it to a local copy of your HDRP package.

We’re running a modded copy of HDRP already (e.g. the tweak that gives us clouds/sky reflection for transparent SSR, and a few more bits).

So yeah, I’m more than happy to copy and paste a temp fix in myself.

DM?

Resolved in 2022.3.59

Hey, the latest version I can see is 2022.3.58 (in which the problem persists), is this a typo? Or do you have early access to this version?

2022.3.59 is on its way, not released yet.

Issue tracker confirms the fix here:

Unity Issue Tracker - [HDRP] Wrong scene-color sampling when injecting a custom pass on Before PostProcess

Hello, we’ve hit this issue with our fullscreen picking outline pass (which needs scene color to desaturate the objects being picked).

I’ve tested the fix by upgrading the project to 2022.3.62f1; however, the bug is still present with the following repro:

  • Enter play mode with the GameView window set to an arbitrary size lower than the screen size. Everything renders fine; the texture bound to the _ColorPyramid id flips between a ***_CameraColorBufferMipChain and a DistortionColorBufferMipChain texture, with sizes in sync with the rest of the buffers:


  • Maximize the GameView and notice how HDSceneColor flips between incorrect and correct sampling, depending on which texture gets bound to _ColorPyramid. However, in this case, the _CameraColorBufferMipChain seems out of sync with the rest of the texture sizes bound to the pass:


Basically, only when the texture bound to _ColorPyramid is the DistortionBuffer (which seems to have the correct size after maximizing the view) does everything render fine; the UV and render scale values applied to it then seem correct.

Edit: The repro is also valid in builds where you start with a custom lower resolution and change to a higher one (windowed / fullscreen).

Any thoughts on this? I’m currently trying to intercept the color buffer binding to adjust the render scale manually (I’ve basically made a custom shader graph node that does exactly what the HDSceneColor node does, just with an extra scale factor applied to uv * _RTHandleScale.xy), but it’s proving difficult to access the handle bound by the RenderGraph builder, as I don’t have access to it in the custom pass.

I realized that, while my attempt to fix the issue was looking at the right culprits, I wasn’t applying the fix where I was supposed to. So I modified our custom HDSceneColor function to take an additional float2 input that specifies the correct size of the buffers coming into the pass, and to query the size of the texture resource actually bound to _ColorPyramidTexture. The ratio of those two sizes is then used to adjust _RTHandleScale.xy when sampling from the pyramid:

// Query the actual allocated size of the color pyramid texture bound to this pass (mip 0).
uint width, height, elements, levels;
_ColorPyramidTexture.GetDimensions(0, width, height, elements, levels);
float2 pyramidSize = float2(width, height);
// passBufferDimensions: the buffer size the pass expects, sent in from C# via the material.
float2 scaleAdjustment = passBufferDimensions / pyramidSize;
output = SAMPLE_TEXTURE2D_X_LOD(_ColorPyramidTexture, s_trilinear_clamp_sampler, uv * _RTHandleScale.xy * scaleAdjustment, 0).rgb;

I send the passBufferDimensions from the pass into the material, and it seems to fix the problem (that scaleAdjustment basically kicks in to compensate for the buffer size mismatch, which I think should already be accounted for in the _RTHandleScale.xy values).
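
For completeness, the C# side just pushes the expected buffer size into the material before the draw. Roughly like this (a sketch only; the class, material and _PassBufferDimensions property names are examples from our project, and I’m assuming the allocated size of ctx.cameraColorBuffer is the reference that _RTHandleScale.xy was computed against):

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

// Sketch: feed the "expected" buffer size into the outline material before the fullscreen draw.
class PickingOutlinePass : CustomPass
{
    public Material outlineMaterial;

    protected override void Execute(CustomPassContext ctx)
    {
        // Allocated size of the color buffer bound to this pass; assumed to be the
        // reference size that _RTHandleScale.xy was computed against.
        var rt = ctx.cameraColorBuffer.rt;
        outlineMaterial.SetVector("_PassBufferDimensions",
            new Vector4(rt.width, rt.height, 0f, 0f));

        // Fullscreen draw with the patched shader (which applies the scaleAdjustment above).
        CoreUtils.SetRenderTarget(ctx.cmd, ctx.cameraColorBuffer);
        CoreUtils.DrawFullScreen(ctx.cmd, outlineMaterial, ctx.propertyBlock);
    }
}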