Any examples of Scene Color being used?

Under the built-in render pipeline, I used to be able to create a lensing effect using the shader shown here, which I believe is the refractive glass shader that was/is included in Standard Assets:
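(The original attachment isn’t shown here, but for reference, a minimal sketch of a typical GrabPass-based refraction shader for the built-in pipeline looks roughly like the following. The names and distortion amount are illustrative, and this isn’t necessarily the exact Standard Assets shader.)

    Shader "Example/GrabPassRefraction"
    {
        Properties
        {
            _BumpMap ("Normal Map", 2D) = "bump" {}
            _Distortion ("Distortion", Range(0, 0.2)) = 0.05
        }
        SubShader
        {
            Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }

            // Copies everything rendered so far into _GrabTexture
            GrabPass { }

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                sampler2D _GrabTexture;
                sampler2D _BumpMap;
                float4 _BumpMap_ST;
                float _Distortion;

                struct v2f
                {
                    float4 pos    : SV_POSITION;
                    float4 grabUV : TEXCOORD0;
                    float2 uv     : TEXCOORD1;
                };

                v2f vert (appdata_base v)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    // Screen-space coordinates for sampling the grabbed texture
                    o.grabUV = ComputeGrabScreenPos(o.pos);
                    o.uv = TRANSFORM_TEX(v.texcoord, _BumpMap);
                    return o;
                }

                half4 frag (v2f i) : SV_Target
                {
                    // Nudge the grab coordinates by the tangent-space normal.
                    // With a flat normal map the offset is zero, so the object
                    // samples exactly what is behind it and looks invisible.
                    half2 bump = UnpackNormal(tex2D(_BumpMap, i.uv)).xy;
                    i.grabUV.xy += bump * _Distortion * i.grabUV.w;
                    return tex2Dproj(_GrabTexture, i.grabUV);
                }
                ENDCG
            }
        }
    }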

This allowed me to create a material that would refract the background behind an object. In this example, you can see the door behind the purple object appears to be bent:

4793051--457982--upload_2019-7-27_23-1-30.png

This all seemed like magic to me. The refraction is happening on a plane that bisects the sphere here, and I never really understood how it didn’t result in seams at the edges, but it worked really well.

Unfortunately, under HDRP, there’s no GrabPass, so this effect stopped working. I kept waiting for some replacement, and my understanding was that maybe the Scene Color node would be a similar replacement. I’ve tried using it, but I don’t yet understand whether it will work similarly to GrabPass. Specifically, with the GrabPass approach, the refracting object would appear to be a completely clear object in the scene if no normal map was given.

In trying to reproduce this under ShaderGraph, I believe I need to start by creating a shader that scales and offsets the UV of the object based on its scale and position relative to the camera. I was wondering if anyone knows how that could be achieved? My first thought was to take the Object node, and multiply or divide by the Scale, and use that as the Tiling override for the UV, but I’m not sure how to take the position of the object relative to the camera into account. So, in short, does anyone have an example of Scene Color being used such that it creates a mainly invisible object in the scene, which I could then extend and tweak?

Thanks.

I’m making some progress on using Scene Color, but there’s something I don’t understand. Maybe someone can assist.

The color output of the Scene Color node seems wrong. I created the following simple shader:

When I create a material with this shader and add it to a cube, the color of the cube is noticeably different from the background color. Here you can see the lighter square is the cube using the Scene Color node shader, while the darker blue is the background:

4794305--458177--upload_2019-7-28_13-10-46.png

So that’s confusing. Anyone know why that would be the case? I’ve tried changing as many settings as I could, and it doesn’t change this appearance.


Is your shader set to additive, or alpha blend?

I’ve tried Additive, Alpha Blend, and Pre-Multiply. Additive looks extra blown out, while the other two look like the screenshot. I’ve tried this in 2019.1 and 2019.3 alpha, but I get the same results.

4799375--458795--upload_2019-7-29_18-20-25.png

It’s also highly possible I don’t have an understanding of how Scene Color should be used. I’m trying to recreate the effect I had when using GrabPass, where I more or less want all of the content of a chunk of the screen (the region beyond a plane facing the camera), which I can then distort arbitrarily via the shader. Maybe that’s not what Scene Color is for…

What version of the HDRP are you using? Testing locally with LWRP 5.16 in 2019.1 it “just works” and the object is completely invisible.

However, with HDRP 5.16 it is too bright, but also much, much more obviously so. Maybe the background color you have is just making it less obvious.
4802705--459329--upload_2019-7-30_10-51-19.png

I’ve been complaining about the Unlit Master node’s handling of alpha for a while, so it could be related.

I was using HDRP 5.16 under 2019.1, and 7.0.1 under 2019.3 (and found the results to be the same between both.)

I haven’t tried a LWRP project, to compare whether the behavior is different between LWRP and HDRP.

In the case where you say it “just works”, you created a new shader and plugged Scene Color into the Color output, created a material from that shader, (presumably set the material to transparent), and then just dropped the material onto a sphere?

Yep. You could not tell there was a sphere there. I offset the UVs a little to make sure it actually was there, and it was.

I think in the HDRP the Unlit Master node might be straight up broken, which is weird. Changing between Additive and Alpha in HDRP 5.16 looked identical, which it absolutely should not; Additive should look way brighter. That means the Color value is blending as if it’s Additive regardless of the blending mode?! Either the expectations for what the Color input does between LWRP and HDRP are different, or the HDRP is broken.

Anyway, a workaround:
Use the HDRP Unlit Master node (which you already are)
Don’t connect anything to the Color input and set it to a solid black.
Connect the Scene Color node to the Emission.


Very cool. Connecting the Scene Color to Emission, and setting Color to black, gives the precise result I expected. I should have tried that. Thanks very much for the advice. I’ll submit a bug for the unexpected behavior of connecting Scene Color to the main Color output.

I’m finally getting back to this, and I’ve hit a conceptual stumbling block that makes me wonder if my approach can work at all.

Thinking back to how things worked with GrabPass, the magic there seemed to be that the GrabPass data contained the color of each pixel behind the object with the material on it. The color data was captured during the GrabPass, and then provided to the next pass in the shader, as two independent steps.

Using SceneColor, however, doesn’t seem to work the same way. Instead of capturing what’s “beyond” the object with the material on it, the current appearance of that object is reflected in SceneColor. The result is a feedback loop, with an effect similar to when old versions of Windows 95 would freak out and stop repainting the screen properly (with duplicates of the window being repainted all over the screen.) For example, here’s a screenshot of the plane I have that’s rendering just the SceneColor node. I’ve tagged the corners of the plane in red. You can see the feedback loop effect:

4837724--464477--upload_2019-8-8_21-2-47.png

So, if I want SceneColor to contain what is beyond the plane, then I’d need to put the plane on a layer the camera can’t see. But then the plane won’t show up on the camera at all. And I believe that having multiple cameras (layers on top of each other) is no longer supported under HDRP.

So, I’m left with the question: Is there a way to make this work at all? I don’t believe ShaderGraph supports multiple passes, but it seems to me that the only way for this to work is if I capture SceneColor in one pass, and render the plane only in the second pass.

The Scene Color node samples from the _CameraOpaqueTexture, the name of which should give a clue to the fact that it contains everything from the opaque pass.

Basically, the old GrabPass would copy the current render target’s contents into another texture so it could be sampled from when the grab pass shader’s later passes got rendered (the GrabPass itself doesn’t actually render anything). In the case of named grab passes it would only make a copy the first time that name appears, and later grab passes of the same name are ignored.

Copying the screen contents is slow as it causes the whole GPU to stall for a moment, so grab passes are incredibly inefficient. So instead the SRPs reuse the copy already being made after the opaque queue, which post processes like AO use. If you enable the opaque texture in the SRP’s asset it keeps that copy around for transparent queue objects to sample from, optionally applying some minor amount of processing to it, like downsampling or otherwise lightly blurring it.
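If you want to poke at this directly, a minimal sketch of what the Scene Color node boils down to, written as a Custom Function node body, is below. The SHADERGRAPH_SAMPLE_SCENE_COLOR macro is what the node’s generated code uses; treat the exact include setup and per-pipeline behaviour as assumptions, not guarantees:

    // Roughly what the Scene Color node does: sample the opaque-pass copy of
    // the screen. Feed ScreenPosition the normalized Screen Position.
    void SampleOpaqueColor_float(float4 ScreenPosition, out float3 Out)
    {
        // Maps to _CameraOpaqueTexture in URP/LWRP (needs "Opaque Texture"
        // enabled on the pipeline asset) or the color pyramid in HDRP; it may
        // just return black if that texture isn't available.
        Out = SHADERGRAPH_SAMPLE_SCENE_COLOR(ScreenPosition.xy);
    }

Since the copy is taken after the opaque queue, this only makes sense on a Transparent surface type.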

I did end up getting something that looks almost reasonable, which required changing the render pass to “After Post-processing”. (All of the other render pass options resulted in the camera “seeing” the mesh and causing feedback.) I haven’t got the tiling and offset correct (assuming it’s actually possible to get correct in this case), but this sort of looks like what I’d want:

Weirdly, the quad is rendering a little brighter than expected again under After Post-processing, maybe because the rest of the scene has been affected by post-processing and the quad has not; I’m not sure. I am using the approach where I output the color to the Emission output rather than the Color output.

Anyway, even if it did look good, unfortunately it captures stuff in the foreground, which definitely isn’t what I want. Here the player’s hand is being rendered into the quad:

4838081--464546--upload_2019-8-9_0-7-43.png

So I’m not sure whether what I’m trying to accomplish (distorting everything behind a plane) is even possible at this point. Or, this feels like the wrong approach, and I should be trying to make use of built-in SRP distortion functionality.

Hi dgoyette,

have you managed to get the effect you wanted?

I also need some kind of surface effect that distorts everything behind a plane. So far I’ve got the last screenshot you’ve uploaded. I’m using HDRP 7.1.8 and Unity 2019.3.5f

Not yet. I’m still waiting on a few bugs in 2019.3 to be fixed before I migrate over, at which point I’ll take another pass at this.

Hi dgoyette, if I understood your question correctly, you want to make an unlit material shader that renders like a perfect piece of glass, without any color modification, which you can then apply a distortion effect to.

I was looking for this effect a minute ago as well and was able to achieve it with the following setup (btw material surface type must be Transparent):

the result is a completely transparent object with modifiable UV coords for distortion:


Thanks for reminding me to come back to this. I revisited this last month, and found that the simplest approach that met my needs was actually to use VFX Graph for this, using its “Output Particle Distortion Quad”. In my case, I wanted a symmetric effect, rather than something I could use on an arbitrary mesh, so the quad-based approach worked fine. Basically, just give it a circular/conical normal map, and the effect is complete:

Here you can see it distorting the parallel lines of the wall and floor:

The caveats are the same as they always were under GrabPass in the built-in renderer: you’ll get weird behavior if this is used at the edge of the screen, and you’ll get weird behavior if it’s used next to certain sharp geometry, like objects in the foreground. I’m only using this effect sparingly, and only for about 1 second at a time, so the weird cases are forgivable.


You are an absolute legend and have saved my progress! :smile: Thank you so much for that exposure multiply, that was exactly what was missing, even though I don’t yet understand why. Just started with HDRP and you are a hero to us noobs.


If it helps, I tried this in URP with what @bgolus proposed and it worked perfectly. I didn’t need any exposure correction (in URP you don’t have such a node), but I had to make sure that the shader is unlit to avoid a similar effect.

I am using HDRP 13.1.8 in 2022.1.1f1. I was trying to work out how to do raindrops on a window with distortion. Many solutions I have seen just adjust the normal map, but that only affects the shine (as far as I can tell); no distortion actually occurs. It’s okay, but I was trying to do better. Distant shot:

8397879--1108524--upload_2022-8-28_14-51-37.png

Close in shot. You can see “shine” on it, but no distortion.

8397879--1108530--upload_2022-8-28_14-52-36.png

I found this thread, and managed to get something going (I am a beginner with shader graphs - this thread is great!)

The outer texture is 50% grey (0.5), with a light (1) then dark (0) range. The Remap node effectively subtracts half and then scales it weaker, and I add the result to the screen position.
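In code terms, that remap-and-offset is roughly the following (a sketch that mirrors the node setup rather than being taken from it; Strength is a hypothetical small value like 0.02, and the scene color macro is the same assumption as above):

    // Distortion is the grayscale texture value at this fragment; 0.5 means
    // "no offset". ScreenPosition is the normalized Screen Position.
    void DistortSceneColor_float(float Distortion, float Strength, float4 ScreenPosition, out float3 Out)
    {
        // Remap 0..1 to -0.5..0.5, then scale it down (what the Remap node does)
        float offset = (Distortion - 0.5) * Strength;
        // Adding a scalar offsets both UV axes, matching adding it to the
        // screen position in the graph, then sample the opaque scene color there
        Out = SHADERGRAPH_SAMPLE_SCENE_COLOR(ScreenPosition.xy + offset);
    }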

The result:

But if I am understanding correctly, it is sampling the SCREEN space, not what is actually behind the object. So things in front of the object with the shader, if in the camera view, can be seen in the layer.

8397879--1108545--upload_2022-8-28_14-59-41.png

So it’s not distorting what you can see through the pane of glass, but rather distorting what the camera is capturing. I assume that is because it is sampling the “screen color” with UV offsets, not passing light through with distortions. So close, but no cigar for a window with raindrops on it.

Is a window pane with raindrops distorting what you see through the window possible? Thanks!

Correct.

This isn’t ray tracing or path tracing where you can do real refractions. Distorting the screen color is how refractions are done with rasterization. If you don’t want things in front of the glass to show up in the refraction, you need to check the depth of the samples and reject ones closer to the camera than the refracting surface. If it’s a very important refraction you can use a render texture and a camera that only renders what’s past the glass, but that gets very expensive quickly.

You also want to use a normal that you transform into view space, and then use that to distort the screen UVs (make sure to scale the distortion by the aspect ratio and world distance).
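A sketch of that approach written as a Custom Function node body is below. It assumes URP-style helpers (SampleSceneDepth from DeclareDepthTexture.hlsl, LinearEyeDepth, and the same scene color macro as above) with the Opaque Texture and Depth Texture enabled on the pipeline asset; the input names and the exact scaling convention are my own choices, not the only valid ones:

    // Refract the scene color behind this fragment using a view-space normal,
    // and reject distorted samples that land on foreground geometry.
    void RefractSceneColor_float(
        float3 ViewNormal,      // Normal Vector node set to View space
        float4 ScreenPosition,  // normalized Screen Position (0..1)
        float  SurfaceEyeDepth, // this fragment's own linear eye depth
        float  Strength,
        out float3 Out)
    {
        // Scale by the aspect ratio so the offset is isotropic on screen, and
        // divide by distance so far-away surfaces don't smear across the screen.
        float2 offset = ViewNormal.xy * Strength / max(SurfaceEyeDepth, 0.0001);
        offset.x *= _ScreenParams.y / _ScreenParams.x;

        float2 uv = ScreenPosition.xy + offset;

        // If the scene depth at the distorted UV is closer to the camera than
        // this fragment, the sample is foreground; fall back to the undistorted
        // UV so objects in front of the glass don't leak into the refraction.
        float sceneEyeDepth = LinearEyeDepth(SampleSceneDepth(uv), _ZBufferParams);
        if (sceneEyeDepth < SurfaceEyeDepth)
            uv = ScreenPosition.xy;

        Out = SHADERGRAPH_SAMPLE_SCENE_COLOR(uv);
    }

In a graph, the view-space normal can come from the Normal Vector node set to View space, and the fragment’s own eye depth from the negated Z of a View-space Position node; the output then feeds the Emission input as discussed earlier in the thread.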

I have been working on a Shader Graph water shader which includes refraction and faced issues with front objects being refracted in the water. However, this tutorial had a solution for that specific issue. Highly recommend checking it out: