Feedback Wanted: High Definition Render Pipeline

Please place all HDRP feedback in this thread. The other SRP thread should be used for core API feedback.

High Definition Render Pipeline overview

The High Definition Render Pipeline (HDRP) is a high-fidelity Scriptable Render Pipeline built by Unity to target modern (Compute Shader compatible) platforms.

The HDRP utilizes Physically-Based Lighting techniques, linear lighting, HDR lighting, and a configurable hybrid Tile/Cluster deferred/Forward lighting architecture, giving you the tools you need to create games, technical demos, animations and more to a high graphical standard.

NOTE: Projects made using HDRP are not compatible with the Lightweight Render Pipeline. You must decide which render pipeline your project will use before starting development, as features are not cross-compatible between HDRP and Lightweight.

This section contains the information you need to begin creating applications using HDRP, including information on Lighting, Materials and Shaders, Cameras, debugging, and topics for advanced users.

HDRP is only supported on the following platforms:

Note: HDRP will only work on the following platforms if the device supports Compute Shaders. For example, HDRP will only work on iOS if the iPhone model supports Compute Shaders.

  • Windows and Windows Store, with DirectX 11 or DirectX 12 and Shader Model 5.0
  • macOS and iOS using Metal graphics
  • Android, Linux and Windows platforms with Vulkan
  • Modern consoles (Sony PS4 and Microsoft Xbox One)

HDRP does not support OpenGL or OpenGL ES devices.


Hi Tim, is Vulkan support supposed to be working currently? I had some issues when I tried it on 2018.2. Basically Unity crashed.
Also is there any info available on what performance difference can be expected between DX11, DX12 and Vulkan?

I have a question regarding the normal map format in the HD pipeline…
Is it still true that Unity uses the “OpenGL” normal map format? Or has the normal map format in HDRP been changed to “DirectX”, like in UE4?
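For reference, the two conventions differ only in the sign of the green (Y) channel, so converting between them is a single flip. A minimal sketch (the helper name is hypothetical, for illustration only; it is not part of the HDRP ShaderLibrary):

```hlsl
// OpenGL-style normal maps store +Y as "up"; DirectX-style maps store +Y as "down".
// Flipping the green channel converts a tangent-space normal between the two.
// Hypothetical helper for illustration only.
float3 FlipNormalGreenChannel(float3 n)
{
    n.y = -n.y; // swap between OpenGL and DirectX conventions
    return n;
}
```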

Is there any particle shader in HDRP? Or is it possible to recreate one with the HD Shader Graph?

The ‘proper’, ‘futuristic’ particle solution for HDRP is the new VFX system. But that system is still rather new, so perhaps there are intermediate solutions that enable the old particle system to work with HDRP; I’m afraid I lack knowledge about that side of things. I would certainly encourage people to take a look at the VFX system for HDRP, but depending on your skills this advice probably becomes more sensible once VFX is available in preview form via the Package Manager, rather than in its current state of public development on GitHub. And since VFX is a brand-new, graph- and compute-shader-based system, there is no upgrade path for existing particle systems.

Provisional VFX documentation: https://github.com/Unity-Technologies/ScriptableRenderPipeline/wiki/Visual-Effects-Editor

I can’t get fog to work in the HD SRP for a custom unlit emissive shader. In LW it works fine.

In RenderDoc, both GetPositionInput() and EvaluateAtmosphericScattering() are skipped when single-stepping through the code. I am not sure whether this is because the code is ignored or because RenderDoc is unable to step into third-party functions. Looking at the assembly, those functions do indeed appear to be missing. Strange, because the compiler doesn’t throw an error.

I am not sure where in RenderDoc to check whether _AtmosphericScatteringType is correctly initialized.

The blend mode is present as a shader feature.

Below is the full shader I am using. Here is also a small repo:
https://drive.google.com/file/d/1Sn6gIIK6li6fXM05Jo_JOlnHZl2yltXp/view?usp=sharing

The GameObjects containing the OmniSimple shader are called CityLightsAmber0 and CityLightsWhite0. Note that the shader needs custom vertex data and cannot be applied to a regular mesh, so the repo is indeed required.

The camera controls in game mode are the same as in the Editor.
You can see a spherical shadow effect and incorrect blending with the cube.

//This shader contains a failed attempt to get fog (atmospheric scattering) to work.
Shader "Lights/OmniSimple"{

    Properties{

        _MainTex ("Light Texture", 2D) = "white" {}
        [HDR]_FrontColor ("Front Color", Color) = (0.5,0.5,0.5,0.5)
        _MinPixelSize ("Minimum screen size", FLOAT) = 5.0
        _Attenuation ("Attenuation", Range(0.01, 1)) = 0.37
        _BrightnessOffset ("Brightness offset", Range(-1, 1)) = 0
    }

   HLSLINCLUDE
   #pragma target 4.5
   #pragma glsl_no_auto_normalization
   #pragma enable_d3d11_debug_symbols

   #pragma shader_feature _SURFACE_TYPE_TRANSPARENT
   #pragma shader_feature _BLENDMODE_ALPHA
   //#pragma shader_feature _ _BLENDMODE_ALPHA _BLENDMODE_ADD _BLENDMODE_PRE_MULTIPLY
   #pragma shader_feature _ENABLE_FOG_ON_TRANSPARENT

   #define UNITY_MATERIAL_UNLIT // Needs to be defined before including Material.hlsl

   #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Common.hlsl"
   #include "Packages/com.unity.render-pipelines.high-definition/Runtime/ShaderLibrary/ShaderVariables.hlsl"   
   #include "Packages/com.unity.render-pipelines.high-definition/Runtime/Material/Material.hlsl"   //for Fog
   #include "lightFunctions.cginc"
 
   uniform sampler2D _MainTex;       
   float _MinPixelSize;
   half _BrightnessOffset;
   float _Attenuation;
   half4 _FrontColor;

   //These global variables are set from a Unity script.
   float _ScaleFactor;

   struct vertexInput {

       float4 center : POSITION; //Mesh center position is stored in the position channel (vertices in Unity).
       float4 corner : TANGENT; //Mesh corner is stored in the tangent channel (tangent in Unity). The scale is stored in the w component.
       float2 uvs : TEXCOORD0; //Texture coordinates (uv in Unity).       
   };

   struct vertexOutput{

       float4 pos : SV_POSITION;
       float2 uvs : TEXCOORD0;
       half4 color : COLOR;

       //This is not a UV coordinate but it is just used to pass some variables
       //from the vertex shader to the fragment shader: xyz = world space pos. w = gain
       float4 container : TEXCOORD1;
   };           

   vertexOutput vert(vertexInput input){

       vertexOutput output;
       half gain;
       half distanceGain;
       float scale;
       float3 positionWS;

       //Get a vector from the vertex to the camera and cache the result.
       float3 objSpaceViewDir = ObjSpaceViewDir2(input.center);

       //Get the distance between the camera and the light.
       float distance = length(objSpaceViewDir);   

       output.color = _FrontColor;

       //Calculate the scale. If the light size is smaller than one pixel, scale it up
       //so it remains at least one pixel in size.
       scale = ScaleUp(distance, _ScaleFactor, input.corner.w, 1.0f, _MinPixelSize);

       //Get the vertex offset to shift and scale the light.
       float4 offset = GetOffset(scale, input.corner);

       //Place the vertex by moving it away from the center.
       //Rotate the billboard towards the camera.
       positionWS = TransformObjectToWorld(input.center.xyz);
       output.pos.xyz = TransformWorldToView(positionWS) + offset.xyz;
       output.pos = mul(UNITY_MATRIX_P, float4(output.pos.xyz, 1.0f));

       //Far away lights should be less bright. Attenuate with the inverse square law.
       distanceGain = Attenuate(distance, _Attenuation);

       //Merge the distance gain (attenuation), and light brightness into a single gain value.
       gain = (_BrightnessOffset - (1.0h - distanceGain));

       //Send the gain and positionWS to the fragment shader.
       output.container = float4(positionWS, gain);

       //UV mapping.
       output.uvs = input.uvs;

       return output;
   }

   half4 frag(vertexOutput input) : SV_Target{

       //Compute the final color.
       //Note: input.container.w carries the gain from the vertex shader. No need to calculate this per fragment.
       half4 col = 2.0h * input.color * tex2D(_MainTex, input.uvs) * (exp(input.container.w * 5.0h));     

       //input.positionSS is SV_Position   (float2 positionSS, float2 invScreenSize, float deviceDepth, float linearDepth, float3 positionWS)
       //PositionInputs posInput = GetPositionInput(input.pos.xy, _ScreenSize.zw, input.pos.z, UNITY_MATRIX_I_VP, UNITY_MATRIX_V);
       PositionInputs posInput = GetPositionInput(input.pos.xy, _ScreenSize.zw, input.pos.z, input.pos.w, input.container.xyz);

       //This does not have any effect.
       col = EvaluateAtmosphericScattering(posInput, col);

       return col;
   }
   ENDHLSL

   SubShader{

        Tags {"RenderType"="Transparent" "Queue"="Transparent"} //Queue tag needed so the pass sorts and blends with other transparents.

        Pass
        {
            Name ""
            Tags{ "LightMode" = "ForwardOnly" }
           Blend SrcAlpha One
           AlphaTest Greater .01 //Legacy fixed-function command; ignored with programmable HLSL shaders.
           ColorMask RGB
           Lighting Off //Legacy fixed-function command; ignored by SRP.
           ZWrite Off  
            HLSLPROGRAM
                #pragma vertex vert
                #pragma fragment frag
            ENDHLSL
        }
   }     
}
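One possible explanation for the functions missing from the disassembly (an assumption, not verified against this exact HDRP version): `#pragma shader_feature` variants are only compiled when a material actually enables the corresponding keyword, and nothing in this hand-written shader enables `_ENABLE_FOG_ON_TRANSPARENT`, so the fog path may be stripped at compile time. A sketch of two possible workarounds:

```hlsl
// Option 1: force the fog keyword on unconditionally, so the
// EvaluateAtmosphericScattering() path is always compiled in.
#define _ENABLE_FOG_ON_TRANSPARENT

// Option 2: compile both variants, so the keyword can be toggled
// at runtime (e.g. from a material or a script) without stripping.
#pragma multi_compile _ _ENABLE_FOG_ON_TRANSPARENT
```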

First I want to say I like HDRP: it is faster than the legacy renderer and looks better. But volumetric lighting just destroys performance. Is there any chance that it will be optimized?


HDRP v3.3.0
Unity 2018.3b

I have two cameras:

Near camera
Near: 0.1
Far: 1000
Clear Mode: None
Clear Depth: true
Depth: 2

Far camera
Near: 1000
Far: 10000
Clear Mode: Sky
Clear Depth: true
Depth: 1

On both cameras I have a post-processing layer with a simple post-processing effect, which only outputs the linear01 depth from the depth buffer of the current camera.
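For context, the debug effect described is roughly of this shape. This is a sketch using Post-processing Stack v2 style helpers; the `VaryingsDefault` struct and the `_CameraDepthTexture` binding are assumptions about the setup, not taken from the post:

```hlsl
// Sketch of a post-processing fragment shader that visualizes linear [0,1] depth.
// Assumes the Post-processing Stack's standard varyings and depth-texture macros.
TEXTURE2D_SAMPLER2D(_CameraDepthTexture, sampler_CameraDepthTexture);

float4 Frag(VaryingsDefault i) : SV_Target
{
    // Raw device depth from the current camera's depth buffer.
    float deviceDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture,
                                             sampler_CameraDepthTexture,
                                             i.texcoord);

    // Remap to linear [0, 1] between the near and far clip planes.
    float linear01 = Linear01Depth(deviceDepth);

    return float4(linear01.xxx, 1.0);
}
```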

On the preview of each camera I see the correct results.

Near camera:

Far camera

But final composed result is this:

Why did this happen? Do both cameras share one depth buffer at the post-processing stage? Is this a bug or a known limitation? How can I overcome this issue?


Which UVs are used by the Distortion Vector Map? It seems that the base UV mapping setting doesn’t affect it.

That seems rather correct to me: the near camera renders after the far one, and clears the depth before rendering, so you’re not keeping the depth of the far one.

So I cannot have depth-based post effects on the far camera?

Also I see the similar problem, mentioned here: HDRP and NGUI

Found the solution via the DepthPyramid, thank god.
I also see that PostProcessEvent.BeforeTransparent custom effects are now not executed by HDRP. Are there any plans to fix this?

HDRP 4 questions

  1. What is micro shadowing?
  2. What is the indirect lighting controller?
  3. I enabled SSR everywhere, but it’s still not present. What should I do?

Answers:

  1. Directional shadowing based on the AO and normal textures, based on Naughty Dog’s technique used in UC4 Link
  2. Additional control over lightmap and reflection probe data. You can increase/decrease the intensity of the baked data globally.
  3. Not sure about this one; SSR works fine for me. Edit: try changing the minimum smoothness value in the Volume settings.

Any way to get GPU Instancing on the HDRP Unlit Shader Graph shaders? I only see it for HD Lit, not for Unlit, in 4.0.0.

MSVO has severe problems with large view distances, so we will need a range value for it. We tested with a 3 km view distance and a short 0.05 near clip. The geometry was a huge Unity sphere of 200x200x200.

MSVO gave us harsh banding and low-resolution artefacts, which would be entirely avoidable if we could fade the effect out over distance. Has Unity not tested HDRP fully with open-world rendering?

Please consider adding a falloff/range parameter, or advise us otherwise; apart from this issue, MSVO works fine for us.

Or perhaps it’s just a bug.

HDRP has an issue with materials set to GPU instancing in Editor mode: they flicker horribly between some kind of baked atlas and their actual texture. This applies to static objects. It has been going on forever, so I assume it’s just WIP.

The problem goes away in play mode but does prevent proper level editing, so I have rolled a script that sets the sharedMaterial of static objects to not be instanced.

So why not do that all the time? Well, some things will share materials :P

I need that answer as well…

A question about Volumetric Fog…
I am struggling to achieve a good resolution using a SpotLight, as you can see in the image, even with “increase resolution of volumetrics” enabled. Can anyone help me?