SRP Surface Shaders

The goal of ShaderGraph is to enable building customized shaders that work with Scriptable Render Pipelines and automatically upgrade to keep working with each version of Unity. It was designed to separate and black-box the internals of the SRP implementations, making it easy to build new compatible shaders.

It is a work in progress, though, and doesn’t yet have the flexibility or power to do everything that the surface shader system in Unity did for the built-in renderer.

But, we are working towards that!

We are working on a public roadmap to gather better feedback from you on the directions and priorities of ShaderGraph; we should have it up by the end of the week.

In the meantime, I have a question for everyone:

What are the things you would build in a surface shader system that are not possible within ShaderGraph as it stands?

6 Likes

Seems like a thread for @jbooth_1 to give input.

2 Likes

Quite frankly, a graph will never get there, and forcing us to use a graph for everything to make shaders maintainable is just wrong. You’re not going to force us to abandon C# and make the only text-based language C++ when the visual scripting system ships, are you? If you want to make sure the graph can do everything, just rewrite all of Unity’s hand-written shaders with it. If a graph is good enough for us, it’s good enough for you too. Dogfood it.

As an example of why you will never get there, take something like the basemap generation for terrain. This is a really cool feature which can be used to do more than just basemap generation- it allows you to add passes which generate a render texture for use in your terrain shader. These can be anything you want, and use the tags to determine a name, format, and relative size of the render texture. Here’s an example from Unity’s terrain shader in case you’re not familiar:

Pass
{
    Tags
    {
        "Name" = "_MetallicTex"
        "Format" = "RG16"
        "Size" = "1/4"
    }

    ZTest Always Cull Off ZWrite Off
    Blend One [_DstBlend]

    HLSLPROGRAM

    #define OVERRIDE_SPLAT_SAMPLER_NAME sampler_Mask0
    #include "Packages/com.unity.render-pipelines.high-definition/Runtime/Material/TerrainLit/TerrainLit_Splatmap.hlsl"

    float2 Frag(Varyings input) : SV_Target
    {
        TerrainLitSurfaceData surfaceData;
        InitializeTerrainLitSurfaceData(surfaceData);
        TerrainSplatBlend(input.texcoord.zw, input.texcoord.xy, surfaceData);
        return float2(surfaceData.metallic, surfaceData.ao);
    }

    ENDHLSL
}

Now let’s say I have to use the graph for maintainability and I want to write a terrain shader, which means I’ll need to write a basemap generation shader as well. But wait, this isn’t a surface shader you say, so it doesn’t count, right? Except that if my entire shader is written in a shader graph, I need to call that code from this shader, and the only way to do that is to support all of this in the graph as well. (Or constantly hack out the code I need every time I change the shader graph, which is a nightmare). And currently I can use this to do things Unity doesn’t use it for - like baking out procedural texturing into a splat map, or any other data I want to bake every time the terrain is changed.

This is where text representations just shine. Adding this functionality to the terrain system was likely pretty straightforward: read some tags from the shader, generate some render textures, render the passes to the buffers, set the buffers on the main terrain material, profit. Adding the same functionality to the graph would require a new master node with custom passes and settings, making the addition of custom features like these much more expensive for Unity. So if you really want to push everything through the graph, you need to dogfood it as such, stop writing hand-written shaders, and begin the process of supporting all of these edge cases, in effect bringing other areas of development to a crawl. Oh, and don’t forget I could easily have written this system myself, so the shader graph system would need to support any non-surface-shader system as well, since once my code is in the graph I’ll need to be able to call that code from any type of shader I might need.

You don’t want the graph to be everything to everybody- it’s not achievable, and it will just cripple everyone in the long run. It should be focused on what graphs are good for: shaders which are closely tied to the art. And you should be writing an abstraction for hand-written shaders which allows them to excel at the things a graph just isn’t good for.


But to answer your question:

Since I write shader generators, I can basically switch anything in a surface shader very easily by generating different code. I guess it’s theoretically possible for you to write a system where I can dynamically generate a graph, but this seems pretty painful compared to just writing the code the graph would generate anyway.

  • Ability to understand the code the graph is going to write; graphs are an abstraction, and every abstraction means hiding information, which means you’re further from the code. This always has a cost, and it’s very easy for a graph to hide it without you realizing. Much better information would be needed here, like a code output window, feedback from the compiler on cost, etc.

  • Control over the V2F structure and how things move across the stages (this was limited in surface shaders in some cases)

  • Ability to perform work in the vertex function

  • Structs- wiring is just not maintainable through complex systems

  • Macros. I avoid these in shaders, but they make some things possible

  • Better handling of sampling- such that no node gets direct access to the sampler/texture, but sampling nodes can somehow be chained together. Right now a triplanar node takes a texture and sampler- but if you want to do POM, they can’t be combined, because POM needs the texture and sampler itself.

  • Ability to have thousands of shader_feature equivalents (requires ability to dynamically emit code, the way my compiler does, and #if #elif around it)

  • Ability to support multiple lighting models within a single shader (I support specular and metallic workflows, along with multiple BRDFs and unlit, switching between them with compile-time generation setting various pragmas and defines)

  • Tessellation

  • Pragmas, custom tags, etc…

  • Fallback and other special shader options (basemap shader, basemap shader generation passes, etc)

  • Instancing, including terrain instancing variants

  • Interfacing with Compute shaders

  • Proper branching, handling of derivatives

  • Ability to have custom editor GUIs

  • Access to the TBN matrix before lighting (I do things in a custom lighting function to blend normals)
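On the sampling point above, a minimal sketch of what chainable sampling could look like if nodes passed the texture and sampler around separately instead of baking the sample call in. All names here are hypothetical, not an existing ShaderGraph API:

```hlsl
// Hypothetical sketch: keep the texture and sampler separate so that
// different sampling strategies (plain UV, POM, triplanar) can share them.
Texture2D _MainTex;
SamplerState sampler_MainTex;

// A plain sample node just forwards to Sample().
float4 SamplePlain(Texture2D tex, SamplerState s, float2 uv)
{
    return tex.Sample(s, uv);
}

// A POM-style node can reuse the same texture/sampler pair to read the
// heightfield, then hand the displaced UV to whatever sampling node is next.
float2 ParallaxUV(Texture2D heightTex, SamplerState s, float2 uv,
                  float3 viewDirTS, float scale)
{
    float height = heightTex.Sample(s, uv).r;
    return uv - viewDirTS.xy / viewDirTS.z * (height * scale);
}
```

With this split, a triplanar node and a parallax node could be chained by wiring the same texture/sampler pair into both, rather than each node owning its own sample call.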

36 Likes

I really hope that question will be followed by a statement along the lines of “we’ve read the hundreds of posts about this, have heard you all and will be reintroducing the concept of surface shaders ASAP to lessen your pain” rather than “we’ll try to figure out some convoluted and unsatisfactory workarounds which you will see implemented sometime in the next couple of years”.

13 Likes

I think the title of the thread itself is a bad approach: “SRP Surface Shaders”.

The ability to switch render pipelines in a project (built-in RP included) while keeping the shader and material compatibility.

What I’d really need in Unity is to choose the render pipeline in a project without actually destroying the materials, so the selection could be reverted. I need the ability to switch render pipelines between built-in, HDRP and URP in the same project at any time. This is critical for maintaining projects that require compatibility with different pipelines, with Asset Store packages being the main example.

In my ideal world, Unity has a Surface Material with the available maps and settings for that material. The built-in pipeline would use some maps and settings in some way. HDRP would use them in some other way. URP would use some maps and settings and discard others. Inside this Surface Material it should be possible to create “per-RP overrides” that take effect when the material is used in a specific RP. Thus, the same material could have a common set of maps and properties together with per-RP overrides. Projects could be switched among the different RPs (built-in included) without having to maintain an entire set of separate materials and maps per RP as happens now (a painful hell).

The above must also be available in code, pretty much like current Surface Shaders. Define the common maps and properties, then #define the code and properties to be assigned based on the current RP.
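A hedged sketch of what those per-RP overrides might look like in code. The `UNITY_PIPELINE_*` keywords and property names here are assumptions for illustration, not an existing Unity API:

```hlsl
// Hypothetical per-RP overrides: the common properties are shared, and
// each pipeline remaps or discards them. Keyword names are assumptions.
#if defined(UNITY_PIPELINE_HDRP)
    #define SURFACE_SMOOTHNESS_MAP _SmoothnessMapHDRP
    #define USE_DETAIL_MAPS 1      // HDRP uses the extra detail maps
#elif defined(UNITY_PIPELINE_URP)
    #define SURFACE_SMOOTHNESS_MAP _SmoothnessMap
    #define USE_DETAIL_MAPS 0      // URP discards them
#else // built-in pipeline
    #define SURFACE_SMOOTHNESS_MAP _SmoothnessMap
    #define USE_DETAIL_MAPS 0
#endif
```

The same shared material data would then flow through whichever mapping the active pipeline selects, so switching RPs never destroys the material.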

Otherwise, I’m afraid I’d have no choice but to stay on the built-in RP. Maybe I’d create an HDRP branch from time to time to record some nice video, but the pain of the process isn’t really worth adopting SRP for in the mid-to-long term.

10 Likes

Yeah, the title of the thread doesn’t really match the line of questioning. The title talks about surface shaders for SRP, but the questions are all about improving the shader graph and implying that you want to get it to replace surface shaders, which it will never be able to do simply because graphs are bad for some things. So what is this thread really about? It kind of screams of “Is there a small hack we can make to shut you all up and get you to use our shader graph for everything?” to which the answer is clearly NO.

12 Likes

We also need to get out of the PBR dogma and have control over lights; I could go on a very long rant about what I call “sitcom lighting” (which I already did in another thread). We need passes, and we need proper vertex control.

I have seen some answers saying “it’s not PBR compliant”, and that’s alarming from an artistic perspective (AO is not PBR compliant either; back in the day it was called the “dirt pass”, as it accumulates in corners). Also, PBR will end up being more costly when we could get a similar visual by simply tweaking the light instead of paying for more lights or area lights (i.e. smoothing the harsh shadow transition by using a plain Lambert), like you would do in a movie light rig. We cheat directly, and unlike movies we have many frames per second to render.

It would also future-proof any new sort of rendering, including NPR rendering, which isn’t a simpler version of PBR lighting; it’s its own class.

The shader graph is just fundamentally flawed in its conception. It’s great for non-gaming applications like architecture or visualization, which are close to simple photorealistic rendering, or for a movie workflow, allowing lighting artists to onboard without having to learn a foreign technique. But anyone who is more of a VFX, technical, or graphics designer is limited.

8 Likes

Another thing you should consider is that a programmer is just more efficient at writing code than connecting graphs.

Having a shader API framework / abstraction layer, with ShaderGraph built on top, is more useful than the other way around.

I understand the need for ShaderGraph; it allows non-programmers to create stunning visuals. However, my experience with node editors is that they work well only for rather simple things.

As soon as a graph reaches a certain complexity, these things get pushed back to a programmer, for example when there is a bug in the graph. As a programmer, I really just roll my eyes when that happens.

Suddenly I’m forced to work in a visual graph to fix someone’s “code” in a tool that isn’t made for programming.

9 Likes

So in an ideal world, the way I think this should have been designed was to have the SRP own the lighting model completely, not the shader code. Obviously the code would get compiled into the resulting shader, but from a workflow perspective the SRP already specifies most of the lighting model since it specifies what passes are used into what buffers, as well as all the constants for lighting and such.

Each SRP would define a structure, much like the SurfaceOutputStandard struct in a surface shader, with the various inputs to the lighting equation. Preferably you could define as many lighting models as your SRP can support- Unlit, PBR, SSS, etc. So if you wanted to add NPR rendering to your URP project, you could just create some new definitions for this stuff and it would work without modifying the underlying SRP or the shaders, assuming you don’t need new passes or constants set from the SRP, and every shader which specifies that model just magically starts working with it (assuming the inputs match- if they don’t, some modifications would be required).

With this configuration you would provide a template shader for each pass, much like the shader graph uses internally. The shader graph could funnel its output through this system, as would a text parser. In the end, the abstraction layer doesn’t know whether the data is coming from a graph or text, nor should it care. It also doesn’t really care what the SRP or lighting model is; it just finds the matching template and inserts functions into it, calling the functions you’ve defined.

With this, people would be able to ship new lighting models to an SRP, which could be applied to any shader you already own by selecting the new model from the drop down or changing the #pragma and input structure in your shader code. If your lighting model uses the standard inputs, then everything just works- URP->HDRP->URP w/ NPR module installed. If it uses a few extra inputs then they become available, and any inputs removed just get removed from the evaluation. SSS in HDRP can use a heavy solution, while URP uses a simpler model.
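A minimal sketch of the idea above, where the SRP (or an installed lighting-model plugin) owns the input struct and the user shader only fills it in. Every name here is illustrative, not an existing API:

```hlsl
// Hypothetical: defined by the SRP or a lighting-model "plugin", much like
// SurfaceOutputStandard was defined by the built-in surface shader system.
struct SurfaceInputs
{
    float3 albedo;
    float3 normalTS;
    float  metallic;
    float  smoothness;
    float  occlusion;
};

// The lighting model would be selected the way surface shaders selected a
// lighting function, e.g.:
// #pragma lighting_model NPRToon

float4 _Color;

// The only thing the user writes; the per-pass templates call it from
// every pass the pipeline needs (forward, shadow, raytracing, ...).
void SurfaceFunction(float2 uv, inout SurfaceInputs s)
{
    s.albedo     = _Color.rgb;
    s.metallic   = 0.0;
    s.smoothness = 0.5;
    s.occlusion  = 1.0;
}
```

Swapping the lighting model then only swaps the code that consumes `SurfaceInputs`; the user-authored `SurfaceFunction` carries over unchanged as long as its inputs still match.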

9 Likes

Thanks for all the replies!

We are actively investigating what an SRP surface shader system would look like. The question here is to explore what people would want to do with surface shaders, to help us understand what we need to build support for, and the relative priorities there.

A simple system that just lets you directly write the SurfaceDescription and VertexDescription functions and describe their input requirements, etc. is something that could be built fairly quickly (especially if we can do a simple C# API there and sidestep the need for a parsed text file format…). But it would largely be limited in many of the same ways ShaderGraph is currently limited, so more work would have to be done to expand the possibilities on top of that.
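For reference, a hedged sketch of what “directly writing the description functions” might look like, modeled loosely on the functions ShaderGraph generates today; the exact struct layouts and signatures are assumptions, not the real API:

```hlsl
// Loosely modeled on ShaderGraph's generated description functions;
// struct fields and function signatures here are illustrative only.
struct VertexDescription
{
    float3 Position;
    float3 Normal;
};

struct SurfaceDescription
{
    float3 BaseColor;
    float  Alpha;
};

VertexDescription VertexDescriptionFunction(float3 positionOS, float3 normalOS)
{
    VertexDescription o;
    o.Position = positionOS + normalOS * 0.01; // e.g. a small shell offset
    o.Normal   = normalOS;
    return o;
}

SurfaceDescription SurfaceDescriptionFunction(float2 uv)
{
    SurfaceDescription o;
    o.BaseColor = float3(uv, 0.0);
    o.Alpha     = 1.0;
    return o;
}
```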

I agree with you that a surface shader system would allow many features to be built more quickly, without having to generate the visual UI to control them; but we would still want to make sure it is done in a clean and maintainable way, with a path to expose as much of that as possible to the graph eventually.

Hmm, this is similar to what we did for VFX support; VFX wants SG to describe the particle appearance, but it needs to feed the properties from its custom built particle state buffers, so there are two code-generation systems that need to play together. We ended up baking down the graph into an “uber-function” representation, that is able to generate the graph function HLSL (and describe the input dependencies) for any subset of the outputs. We are investigating how that system might be generalized and made public so you could grab those graph functions and use them as you see fit.

Interestingly, if we do both this and the surface shader API, we’ll have effectively split the code generation into two parts, and made both halves accessible independently.

Absolutely. I’ve been doing a bit of that recently, and usually just end up converting good portions of complex graphs to custom HLSL function nodes so I can just see the code. I do wish there was a simpler way to pull a custom function node into multiple graphs; having to embed them in sub-graphs is a lot of hoop-jumping. TODO…

We’re working on that! It’s part of the Targets and Stacks tasks on the roadmap, to make it possible to take the same ShaderGraph and use it in multiple render pipelines; with the HDRP specific bits getting ignored in URP, for example. Defining pipeline-specific property overrides is a bigger issue though; will make sure that we put that on our list.

6 Likes

I think doing it without a text parsing system would be difficult, and would lock you into a small box of what’s possible. There’s a lot more that needs to be accessed than just the SurfaceDescription/VertexDescription functions (and quite frankly I personally find those abstractions a bit annoying to work with). However, that format could be easier to parse, or different than surface shaders were. Having it feel like writing a shader was a nice thing, though.

That said, I know you guys were playing with a C#->HLSL thing a while back. While I’m sure there would be a lot of issues to solve, I imagine a lot of shader complexity could be managed much better in a high-level language like C#. Imagine being able to simply make functions virtual and override them, like the lighting function, or more immediately the vertex/pixel functions. The lack of macros would be a huge loss in C#, though…

Yes, but there comes a point of diminishing returns with graphs. For instance, that basemap generation example is likely something that maybe two people will ever use for anything more than just basemap generation, so putting in a lot of time to maintain its current flexibility seems like a lot of man-hours that could be better spent elsewhere. The SG doesn’t need to solve every shader issue; it needs to solve the ones its target audience has.

5 Likes

@ChrisTchou meet @jbooth_1 . Jason is one of our designated representatives. :slight_smile: We trust his knowledge. Please talk to him in a better environment than this forum thread about all this stuff - if you’re not already doing so. I’d say @larsbertram1 and @tatoforever could be included in that conversation. All top-notch devs who know their sh*t when it comes to highly optimised shader development, and who have shown themselves to be dedicated to user satisfaction.

10 Likes

You forgot the amazing bgolus

10 Likes

And a shout out to @Aras who had already explored this stuff for Unity.

4 Likes

And don’t forget the Amplify shader graph creators

2 Likes

The ability to use code. Drag and drop UIs have their limits.

5 Likes

I’m just going to drop in my 2 cents here.

I’m surprised you have switched to using HLSL but are not using interfaces and classes to solve this problem.

//Apologies if I'm shaky on the syntax. It has been a couple years since I last did this.
interface ILightLoop
{
    void doPointLight(PointLight pointLight);
    void doSpotLight(SpotLight spotLight);
    void doDirectionalLight(DirectionalLight directionalLight);
}

class CustomLightingHandler : ILightLoop
{
    float3 ilm;
    float specThreshold;
    float4 accumulatedColor;

    void doPointLight(PointLight pointLight)
    {
        //...
    }

    //... You get the idea
}

//In fragment shader
CustomLightingHandler lightingHandler;
//... Initialize variables

UnityDoLightLoop(lightingHandler);

If you declare the class implementing the interface explicitly, dynamic linkage is not required and the shader should be SM2 compatible. To continue the example, Unity could also provide base implementations of ILightLoop such as PbrLightLoop or BlinnPhongLightLoop. Then someone could write a shader that runs multiple light loops and mixes the values together (expensive, but powerful).

The reason Unity’s solution for simplifying shaders has always been codegen is that Unity writes shaders using #ifdef and macro spaghetti. If you spend some time designing and documenting a proper API, just like in any other software development context, I think you will find a lot of the problems disappear. And I understand that HLSL does not have a built-in way to denote public and internal functions; you would need to come up with your own scheme for that.

Anyways, just my two cents.

7 Likes

I don’t have a lot of experience with shaders, but I just want to say that C# shaders sound really nice, especially if the new mathematics library helps bridge the gap between C# and HLSL.

This would definitely help clean up the code and make a lot of things easier to parse and combine! (I’m often still thinking DX9 era, but HLSL has progressed a lot since then.)

It doesn’t solve the forward-compatibility issue though, which for me is a huge deal- for instance, Unity added several new passes for raytracing to HDRP, as well as new VR rendering code, and in a surface shader world those would have been supported automatically. It also doesn’t help with cross-compatibility, since URP doesn’t have a light loop, for instance. For that, you still need some kind of system to parse and put code fragments into some kind of template shader. That template, however, could be a lot more modular and easier to adapt, and it also wouldn’t have to be restricted in how it’s authored. For instance, you could imagine having nodes in a graph spit out these functions just as easily as a code fragment would, and then a shader written in the graph or a text file could use it. The template itself could be a scriptable object with various code fragments- it doesn’t necessarily have to be an uber-template.

I think fundamentally it comes down to wanting to break a shader into several parts with a minimal tie between them:

  • The code which determines where a vertex goes and what the inputs to the lighting equation are
  • The code which lights a pixel
  • The code which makes everything work (VR, passes, etc)

Ideally I never have to touch the code that makes everything work, because at no time do I really ever want a shader that only works in single pass stereo, for instance, or only works on one version of Unity. I just want all of that stuff automatically supported.

I think the code which lights a pixel can (and should) be SRP dependent. The fundamental difference between most SRPs is how you light a pixel, so this is kinda expected. But within reason it would be nice to treat lighting models like plugins which I can mix and match with the other shader code.

The pixel/vertex code is what we want to be sharable everywhere- with the caveat that the lighting inputs might be a little different, obviously.

3 Likes