I’m seeking clarity on two separate issues related to HDR:
What benefits do we get from HDR rendering if we are not using any of the post processing stacks (and aren’t using HDR values in our custom post)? My understanding from the Unity manual is that HDR is primarily useful during post processing, for effects such as bloom, exposure control and finally tone mapping.
What benefits do we get from HDR skybox and reflection probe textures if we aren’t using HDR rendering?
Understand that the “HDR” option on the camera just controls whether or not the image the camera records can have values outside of “0.0 to 1.0”; technically you’re always rendering in “HDR”. If you render with a light that’s set to an intensity over 1.0, it’ll still have an effect regardless of whether you have HDR enabled on the camera or not. The same is true for the skybox and reflection probes: a non-HDR skybox can only ever be as bright as “1.0” (ignoring exposure), and a non-HDR reflection probe can’t see any objects lit brighter than “1.0”; they’ll just be clamped to that 1.0 max.
If you use a lot of bright lighting, using HDR rendering has some advantages with transparent objects, especially over very bright surfaces.
This is a basic scene with a directional light set to an intensity of 2.0. The right side of the sphere and the ground look the same regardless of HDR being on or off: the bright light blows them out and the details of the texture are hidden. However, behind the dark transparent quad (an alpha-blended material set to black with an alpha of ~90%), the details of the sphere are retained when HDR is on, but still missing when HDR is off.
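To put rough numbers on it (purely illustrative; this isn’t actual URP or blend-unit code, and the texel values are made up):

```hlsl
// Standard alpha blending (Blend SrcAlpha OneMinusSrcAlpha), written out as a
// function purely for illustration; in reality the GPU blend unit does this.
half3 AlphaBlend(half3 srcColor, half srcAlpha, half3 dstColor)
{
    return srcColor * srcAlpha + dstColor * (1.0 - srcAlpha);
}

// Black quad at ~90% alpha over a ground texel lit to 1.6 by the intensity 2.0 light,
// and a neighbouring texel lit to 1.3 (the texture detail):
//
// HDR on, the over-bright values are still in the buffer when the quad blends over them:
//   AlphaBlend(0.0, 0.9, 1.6) = 0.16
//   AlphaBlend(0.0, 0.9, 1.3) = 0.13   -> the detail survives
//
// HDR off, both texels were already clamped to 1.0 when written:
//   AlphaBlend(0.0, 0.9, 1.0) = 0.10
//   AlphaBlend(0.0, 0.9, 1.0) = 0.10   -> the detail is gone
```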
Here’s an example with HDR rendering off, showing the difference between the probe’s HDR setting being on and off.
This is a scene with a shaded area and a very, very bright directional light. When HDR for the probe is on, the blown out green wall shows as green in the reflections on the black glossy ball. When HDR for the probe is off, it’s just the same almost white that you can see in the rendered image.
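Again with made-up numbers (the albedo, light, and reflection values below are just for illustration):

```hlsl
// Illustration of why the non-HDR probe turns the green wall white in the reflection.
half3 ExampleProbeReflection(bool hdrProbe)
{
    const half3 wallAlbedo  = half3(0.3, 0.6, 0.3);  // a green wall
    const half3 brightLight = half3(4.0, 4.0, 4.0);  // the very bright directional light

    half3 litWall = wallAlbedo * brightLight;        // (1.2, 2.4, 1.2), well over 1.0

    // A non-HDR probe can only store 0.0 to 1.0 per channel, so the blown out
    // green gets clamped to plain white.
    half3 probeSample = hdrProbe ? litWall : saturate(litWall);

    // The black glossy ball only reflects a fraction of the probe, say 20%:
    //   HDR probe:     (0.24, 0.48, 0.24) -> still reads as green
    //   non-HDR probe: (0.20, 0.20, 0.20) -> a flat grey/white
    return probeSample * 0.2;
}
```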
If you never have very bright lights, or otherwise never have color values getting blown out (like overlapping additive particles), then HDR isn’t as useful. The most you get is a little bit less banding in some cases with overlapping dark particles.
We do use lights with intensity over 1 frequently, and I think I can see how we would value the dynamic range in our baked reflection probes and skybox textures now, even if we disable HDR rendering for the main camera.
I’d like to make sure I understand how realtime light values over 1 are used when the camera’s HDR rendering is disabled. In our case we are using URP, so some of our realtime lit shaders use this function to compute the diffuse color:
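(I believe this is roughly LightingLambert from URP’s Lighting.hlsl; paraphrased here, so the exact code may differ between URP versions.)

```hlsl
// Paraphrase of URP's LightingLambert; check Lighting.hlsl in your URP version for the real thing.
half3 LightingLambert(half3 lightColor, half3 lightDir, half3 normal)
{
    half NdotL = saturate(dot(normal, lightDir));
    return lightColor * NdotL;
}
```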
I think the lightColor here is HDR, as it seems to be the intensity times the light color (I’m ignoring the shadow and distance attenuation for now; imagine it’s a directional light that is not in shadow for the fragment). So, eventually, specular lighting, fog, etc. are added to these HDR values (if the shader calls for them), and the result is returned by the fragment shader; let’s continue the example of half4 LitPassFragmentSimple, for SimpleLit. And then, at some point, this value will be clamped when it is written to the frame buffer, if HDR rendering is disabled?
Yep, that’s all correct. Everything that happens in the shader itself is done with the same floating point values regardless of HDR being on or off. The value is clamped by the GPU when writing it to the target buffer, by virtue of the fact that the non-HDR render target defaults to ARGB32, which can only store values between 0.0 and 1.0 per channel. HDR uses ARGBHalf, which is a floating point texture format with a range of -65504.0 to +65504.0, so values outside that range still get clamped, but that’s obviously much less common.
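As a sketch of what that means at the end of the fragment shader (illustrative only; the clamp is done by the GPU on write, not by any shader code, and the numbers are made up):

```hlsl
// Illustrative only: the clamp happens when the GPU writes the value to the
// render target, not inside the shader.
half4 ExampleFragment() : SV_Target
{
    half4 color = half4(1.7, 0.4, 0.2, 1.0);  // over-bright result from the lighting math

    // Written to an ARGB32 target (8 bits per channel, HDR off):
    //   stored as (1.0, 0.4, 0.2, 1.0)  -> the 1.7 is clamped to 1.0
    //
    // Written to an ARGBHalf target (16-bit float per channel, HDR on):
    //   stored as (1.7, 0.4, 0.2, 1.0)  -> only clamped outside roughly +/-65504
    return color;
}
```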