I wrote Surface Shaders 2.0 so you don't have to deal with SRPs anymore

So I wrote a surface shader system. It currently compiles shaders for the Standard pipeline and URP, and HDRP should be working soon. I think it’s much cleaner than the old surface shader system, which has a lot of kookiness around screenPos, viewDir, normals, etc. It’s also modular, allowing you to do something like #includes, but where the included file can bring properties, cbuffer entries, and other stuff along with it.

As an example, here’s a basic shader with tessellation support:

BEGIN_OPTIONS
   Tessellation "distance"
END_OPTIONS

BEGIN_PROPERTIES
   _Albedo ("Albedo", 2D) = "white" {}
   _Normal ("Normal", 2D) = "bump" {}
   _Height ("Height Map", 2D) = "black" {}
   _DisplacementAmount("Displacement Amount", Range(0,2)) = 0.5
   _DisplacementMipBias("Displacement Mip Bias", Range(0,6)) = 2
   _TessSubdiv("Tessellation Subdivisions", Range(2, 24)) = 8
   _TessMinDistance("Tessellation Min Distance", Float) = 0
   _TessMaxDistance("Tessellation Max Distance", Float) = 35
END_PROPERTIES

BEGIN_CBUFFER
   float _DisplacementAmount;
   float _DisplacementMipBias;
   float _TessSubdiv;
   float _TessMinDistance;
   float _TessMaxDistance;
END_CBUFFER

BEGIN_CODE

   sampler2D _Albedo;
   sampler2D _Normal;
   sampler2D _Height;

   // (optional) modify the vertex post-tessellation
   void DisplaceVertex(inout VertexData v)
   {
      v.vertex.xyz = v.vertex.xyz + v.normal * tex2Dlod(_Height, float4(v.texcoord0.xy, 0, _DisplacementMipBias)).g * _DisplacementAmount;
   }

   // (optional) if you are using tessellation and displacement, you can return
   // the tessellation distance and subdivision here
   float3 GetTessDistanceFactors ()
   {
      return float3(_TessMinDistance, _TessMaxDistance, _TessSubdiv);
   }

   void SurfaceFunction(inout LightingInputs o, ShaderData d)
   {
      half4 c = tex2D(_Albedo, d.texcoord0.xy);
      o.Albedo = c.rgb;
      o.Alpha = c.a;
      o.Normal = UnpackNormal(tex2D(_Normal, d.texcoord0.xy));
   }

END_CODE

This will compile to all three render pipelines and acts just like any other shader in your project. Note that you don’t write the v2f and other traditional structures; rather, the system uses a naming convention and constructs them for you based on whether you access that data. So, for instance, if you read d.TangentSpaceViewDir, then it will be provided for you.
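For example, here’s a minimal sketch of that in action (the _ParallaxStrength property is just illustrative, not part of the system; _Albedo and _Height are the samplers from the example above):

   void SurfaceFunction(inout LightingInputs o, ShaderData d)
   {
      // referencing d.TangentSpaceViewDir is enough for the system to
      // generate and pass along the tangent space view direction
      float3 tsView = normalize(d.TangentSpaceViewDir);
      float height = tex2D(_Height, d.texcoord0.xy).g;
      // simple parallax-style UV offset driven by the height map
      float2 uv = d.texcoord0.xy + tsView.xy * (height - 0.5) * _ParallaxStrength;
      half4 c = tex2D(_Albedo, uv);
      o.Albedo = c.rgb;
      o.Alpha = c.a;
   }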

Here is an example shader documenting the current data and options that are available (this will grow):

BEGIN_OPTIONS
   // ShaderName "Path/ShaderName"  // The default will just use the filename, but if you want to path/name your shader
   // Tessellation "Distance"       // automatic tessellation, distance, edge, phong
   // Alpha "Blend"                 // use alpha blending?
   // Fallback "Diffuse"            // fallback shader
   // CustomEditor "MyCustomEditor" // Custom Editor
   // RenderType "Opaque"           // render type
   // Queue "Geometry+100"          // forward rendering order
   // Workflow "Metallic"           // Specular or Metallic workflow, metallic is default
END_OPTIONS

// Put any properties you have between the begin/end property blocks
BEGIN_PROPERTIES
    _Color ("Main Color", Color) = (0, 1, 0, 1)
END_PROPERTIES

// Any variables you want to have in the per material CBuffer go here.
BEGIN_CBUFFER
    half4 _Color;
END_CBUFFER

// if you are writing a subshader, any defines that should be set on the main
// shader are defined here
BEGIN_DEFINES

END_DEFINES

// All code goes here
BEGIN_CODE

    // (optional) if you want to modify any vertex data before it's processed,
    //    put it in the ModifyVertex function. The struct is:
    //    struct VertexData
   // {
   //    float4 vertex      : POSITION;
   //    float3 normal      : NORMAL;
   //    float4 tangent     : TANGENT;
   //    float4 texcoord0    : TEXCOORD0;
   //    float4 texcoord1   : TEXCOORD1;
   //    float4 texcoord2   : TEXCOORD2;
   //    float4 texcoord3   : TEXCOORD3;
   //    float4 vertexColor : COLOR;
   // };

   // (optional) modify the vertex
   void ModifyVertex(inout VertexData v)
   {
   }

   // (optional) modify the vertex post-tessellation
   void DisplaceVertex(inout VertexData v)
   {
   }

   // (optional) if you are using automatic tessellation and displacement, you can return
   // the tessellation distance and subdivision here
   float3 GetTessDistanceFactors ()
   {
      float minDistance = 0;
      float maxDistance = 35;
      float subDiv = 12;
      return float3(minDistance, maxDistance, subDiv);
   }


    // (required) Write your surface function, filling out the inputs to the
   // lighting equation. LightingInputs contains:

    // struct LightingInputs
   // {
   //    half3 Albedo;
   //    half3 Normal;
   //    half Smoothness;
   //    half Metallic;     // only used in metallic workflow
   //    half3 Specular; // only used in specular workflow
   //    half Occlusion;
   //    half3 Emission;
   //    half Alpha;
   // };

   // The ShaderData structure contains common data you might want, precomputed
   // for you. Note the system strips unused elements from the structures automatically,
   // so there is no cost to unused stuff.

   // struct ShaderData
   // {
   //    float3 LocalSpacePosition;
   //    float3 LocalSpaceNormal;
   //    float3 LocalSpaceTangent;
   //    float3 WorldSpacePosition;
   //    float3 WorldSpaceNormal;
   //    float3 WorldSpaceTangent;
   //    float3 WorldSpaceViewDir;
   //    float3 TangentSpaceViewDir;
   //    float4 texcoord0;
   //    float4 texcoord1;
   //    float4 texcoord2;
   //    float4 texcoord3;
   //    float2 screenUV;
   //    float4 screenPos;
   //    float3x3 TBNMatrix;
   // };


    void SurfaceFunction(inout LightingInputs o, ShaderData d)
    {
        o.Albedo = _Color.rgb;
        o.Alpha = _Color.a;
    }

END_CODE

Note that even though I have automatic ways to do things like tessellation, it’s still possible to write your own versions of those functions if you want, or to add geometry shaders, compute buffer data, etc. There are significantly fewer restraints and assumptions than the old surface shader system had, and the naming conventions are clear about things like “what space is this in?”. There’s no funky “this will be in this space on Tuesdays, but on Wednesdays it will return NaN, and on Friday it will be whatever the value in o.pos is”, and you don’t have to do funky stuff to get at the TBN matrix; it’s just there, where you can access it. Crazy, right?
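For instance, a minimal sketch of grabbing it directly (reusing the _Normal sampler from the earlier example; the exact mul order depends on how the matrix is built):

   void SurfaceFunction(inout LightingInputs o, ShaderData d)
   {
      half3 tangentNormal = UnpackNormal(tex2D(_Normal, d.texcoord0.xy));
      // d.TBNMatrix is simply available when referenced - e.g. to move the
      // normal into world space yourself instead of letting the system do it
      float3 worldNormal = normalize(mul(tangentNormal, d.TBNMatrix));
      // purely illustrative: whiten upward-facing surfaces
      o.Albedo = lerp(half3(0.3, 0.3, 0.3), half3(1, 1, 1), saturate(worldNormal.y));
      o.Normal = tangentNormal;
   }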

Shaders are also modular. One shader can include another, and it will bring in its properties, cbuffer entries, etc. Right now this is only handled via code, but it would be possible to expose it via a scriptable object, such that users could add “Snow” to an existing shader. Obviously there are limits to how far you can push this, but adding weather effects to existing shaders is a perfect example of something that has been an issue for a lot of games in the past.
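To give a rough idea of the shape such a module could take (exactly how the include/stacking is declared is up to the system, and this assumes the base shader’s SurfaceFunction has already run, but the block format is the same as above and the properties and cbuffer entries travel with the code):

BEGIN_PROPERTIES
   _SnowAmount ("Snow Amount", Range(0,1)) = 0.5
END_PROPERTIES

BEGIN_CBUFFER
   half _SnowAmount;
END_CBUFFER

BEGIN_CODE
   void SurfaceFunction(inout LightingInputs o, ShaderData d)
   {
      // blend towards white based on how upward-facing the surface is
      half snow = saturate(d.WorldSpaceNormal.y) * _SnowAmount;
      o.Albedo = lerp(o.Albedo, half3(1, 1, 1), snow);
      o.Smoothness = lerp(o.Smoothness, 0.4, snow);
   }
END_CODE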

So main benefits:

  • Simple, consistent way to write shaders
  • Write once, run on standard, URP, or HDRP
  • Shaders automatically upgrade between SRP versions (*assuming I have done the support for them)
  • Can write features as separate shaders, then plug them together.

So, all that said, it would be interesting to know some of the following, assuming this is something that interests you:

  • Which SRP do you use, if any?
  • What use cases do you write custom or surface shaders for?
  • What features did you find limiting in Surface Shaders that you would want to have control over?
  • Which unique shading features do you use in your SRP (e.g. bent normals, SSS, etc.), and how would you like to handle fallbacks in other pipelines (e.g. approximate SSS in URP, don’t bother, etc.)?

I also have a lot of different thoughts about how to sell this. I will most likely move MicroSplat over to using this system instead of its current render adapters and provide an upgrade there, as maintenance of systems like these is a huge potential cost. Last year, supporting SRPs was about half of my development time, so having only one system to abstract these issues makes a ton of sense.

Anyway, thoughts welcome.

95 Likes

Holy crap, nicely done!

2 Likes

This is 100% needed, and personally I’d have loved to see a system like this used as the base foundation for shadergraph.

8 Likes

Is this something different than the prototype we created a few months ago?
https://github.com/pschraut/UnityModularShaderPrototype

2 Likes

Design is based off that, yes - but actually compiles to multiple pipelines, supports transparency, spec/metallic workflows, shader stripping, tessellation, proper line number error reporting, etc…

12 Likes

Now this is epic

Brilliant – this is what Unity’s own team should have done from the start, which would have saved the rest of us vast amounts of time.

Have you considered any integration of this with Amplify Shader Editor, possibly as a custom template?

6 Likes

This looks great, love it! And yeah, ticks all the boxes of what I think “surface shaders 2.0” should have been.

The original surface shaders system was done for Unity 3.0 pretty much a decade ago, and one of the causes of its wonkiness was that it tried real hard to save on instructions & interpolators. DX9 shader model 2.0 with max. 64 instructions and 8 interpolators was still a big thing back then, and since all the previously hand-written Unity built-in shaders were going to get changed to surface shaders, the system spent a whole lot of effort making sure there were no regressions in terms of generated code, instructions or interpolators, compared to hand-written shaders. This did lead to things like “oh, a shader does not actually sample the normal map in this variant? this means tangent space does not need to be passed, saving us a couple multiplies”. That was (I think) a good call for the DX9 SM2.0 era, but now a decade later it is just mostly pointless confusion & complexity.

My own initial prototypes for surface shaders looked really similar to your example above (see blog post), which is curious since there’s 11 years of time between them :slight_smile: The final result that ended up in Unity was different since once you want to have multiple sub-shaders, fallbacks, more features, etc. etc. it (to some extent) stops being this nice thing and becomes a bit of a mess. A whole lot of that is much less relevant today though, so fingers crossed your system does not have to become a mess.

And yes, doing all the codegen & logic in C# with a custom importer makes much more sense.

24 Likes

Awesome Jason, eager to try this out!

Frustration and anger are very powerful forces to push the state of the art :wink:

Not sure how that fits with monetization, but I think keeping this working across versions, adding arcane features etc would probably be easier if the source was accessible somewhere - I’m sure many people (including me) would love to push this forward.

4 Likes

SRP: Them all. :cry:
SRP Unique Features: SSS, Iridescence, Bent Normals.
Fallbacks: Please emulate if possible, drop if not.

Yeah, it’s pretty obvious looking at the output code how much work was done to save even one interpolator or computation, none of which matters much anymore. But it likely helped Unity gain dominance on early mobile, which is a large part of the company’s success story - so it might be baggage now, but it had its use. URP is largely a cleaned-up version of the old pipeline, and once you strip all the bloat the shader graph outputs around packing and unpacking structures, it’s quite workable - but I will never understand why it does all that struct conversion; simply standardizing the structs across the pipelines and stages would do a ton to make the code more readable. Right now, it’s like 9k lines, 70% boilerplate, and you’re like “Where is the actual frag function?” (*it’s deep in includes).

I will forever be bitter about whoever made macros assume that the position is called pos in the standard shader. I’m forced to keep that convention internally (well, or I unroll a ton of macros spread across multiple files to avoid it).

The begin/end block style makes it super easy to parse - and while I prefer the surface shader syntax (looks like code), I’m trying to make this as easy on myself as possible, so I can focus on the larger issues: what to do about features which only exist in HDRP and how to gracefully fall back, how to monetize this thing so I’m not trapped in porting hell for minimum wage, etc. But the need is there, and I’ve been hacking together adapters for MicroSplat long enough that doing a more formalized system might make sense even if no one else uses it, and 5 years of raising the issue with Unity hasn’t gotten anywhere - so I’m going for it and we’ll see what happens. I think, if anything, I’ll say no to more stuff than surface shaders did, and be able to ride off the benefits of having pushed that system to its breaking point, as well as not needing to handle super low-end stuff anymore.

I mean the fact that I can do all of this via the game engine’s scripting language is kind of what makes Unity great. A lot of new Unity has been moving away from this kind of scriptability, closing off APIs, etc - but to me that is what makes the engine cool.

16 Likes

Wow, that sounds incredible so far! I wonder what the catch is, if you don’t mind me asking. I assume you’re only planning to support one version of HDRP and URP, e.g. the version shipping with the current LTS?

To answer your questions:

I’m trying HDRP on a personal project (going to try HDRP again with release 10, primarily interested in archviz-like scenes coupled with VFX experiments, mostly because that’s what built-in HDRP features and Shader Graph seem to handle well) and the built-in deferred pipeline on our main project.

We’re mostly using surface shaders. We don’t need to do anything custom with lighting/shadows (although it’s nice to be able to extend the output structure and modify deferred to use it, like AFS does with the translucency/wrap lighting output it adds). Lots of the shaders in the project for things like visual effects are something you could potentially build with Shader Graph, but there are a couple of things I’m not sure it’d support. We rely on support for procedural instancing (we submit batches using Graphics.DrawMeshInstancedIndirect and our shaders read StructuredBuffers to get instance data); we can’t render our levels without that. We also needed custom shaders to do Star Citizen style mesh decals blending into individual deferred buffers. I vaguely recall an old version of that using some of the special “final” functions surface shaders have, but the current one is fully custom, so we probably hit some limitation with that (can’t quite recall right now, I can look into it if it’d be useful).
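For reference, the instancing side of our shaders is just the usual built-in pipeline procedural instancing pattern, roughly like this (the buffer name here is illustrative):

   #pragma multi_compile_instancing
   #pragma instancing_options procedural:setup

   #ifdef UNITY_PROCEDURAL_INSTANCING_ENABLED
   StructuredBuffer<float4x4> _InstanceMatrices;
   #endif

   void setup()
   {
   #ifdef UNITY_PROCEDURAL_INSTANCING_ENABLED
      // pull the per-instance transform out of the buffer filled on the C# side
      // before Graphics.DrawMeshInstancedIndirect is issued
      unity_ObjectToWorld = _InstanceMatrices[unity_InstanceID];
      // (a matching inverse for unity_WorldToObject would normally be set here too)
   #endif
   }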

A bit less boilerplate, like your examples show, would already be a huge improvement. My personal most hated quirk is properties like worldPos giving you different things under different circumstances, and maybe how the vertex shader in surface shaders uses a different space vs. custom shaders. It makes something technically trivial, like a surface billboard shader, more of a pain to implement, as you have to revert whatever was done to the value you hoped to get, then transform your output back into the space the surface system expected. Can’t quite recall much else to complain about off the top of my head; I’m actually pretty happy with surface shaders relative to many other parts of Unity, the biggest pain is them not existing in any form on the new pipelines. :slight_smile:

Full support for all useful forms of instancing (and maybe making it possible to do SSS and deferred mesh decals that can blend separately on separate targets, e.g. just normals) are the most important things for me. Not much else comes to mind just yet. Wrt emulation, I don’t think I’ll need emulation of URP/HDRP features on built-in, if that’s what you were referring to.

No matter how you decide to release it, I’m very much looking forward to hearing more. Keep up the great work, I think the Unity community is lucky to have you!

2 Likes

So yeah, the money part. It’s what stopped me from doing this a while back, because I didn’t really want to be the janitor writing these templates and chasing down changes every time they are made, but also because how do you, exactly, get paid for that work? I think there are several users of this system:

  1. Someone who just wants an approachable way to write a shader, without being hampered by a graph. Maybe they are doing something with compute and need to feed it procedural data, and having to unroll a 10,000 line HDRP shader and modify it is not exactly fun.
  2. People publishing to the Asset Store, like myself, who have to support multiple pipelines.
  3. Larger studios who want one way to write things for multiple projects, which could be using any renderer.

I’d like to support all of these cases, but they each present different challenges. For 1 and 3, you sell an asset on the store that does what it does, and they are happy. However, the market for this is likely not massive - and since most larger studios buy one license and illegally share it, you’re not getting paid what you’re supposed to be paid for those customers, even though it’s easily worth a lot of money for those studios.

For #1:

  • They need a reasonably priced solution to their personal problem: making shaders easy to develop. They are likely not super interested in multiple pipelines.

For #2:

Here there could be options:

  • Adapters sold separately (much like my URP/HDRP adapters for MicroSplat). Maybe the standard pipeline is free, and people buy the URP/HDRP adapters. I get money from selling more adapters, they save tons of development effort by not having to develop everything three times. This is obviously more complex, but at some scale, if enough developers sign on, then users wanting those products for HDRP/URP pick up the adapters and get support in a whole bunch of products. But this needs scale to work - if developers don’t sign on, then the only adapters which sell are because of my products (MicroSplat, etc) anyway. And though I have not really had any backlash about selling SRP compatibility, a lot of UAS devs want to embrace race-to-the-bottom tactics, thinking that pricing low and killing themselves supporting all these different pipelines is going to get them somewhere (it’s not).
  • Enterprise licenses to UAS developers. Real money per year, you get to ship a DLL that compiles this stuff with your product. To the user, everything just works, and you save development time. But this is a small market of mostly shader heavy assets.
  • Just sell the editor, and make UAS developers export different shaders for each platform/version/etc, like they have to do now. This kind of blows because the user experience is still pink materials when they install, until they unpack the “HDRP shaders for 2019.4” package, and asset store authors are shipping non-modifiable source and multiple copies of everything. On the plus side, it’s much easier on my end - I add an export button and I’m done. And if asset store authors ship their source and users buy my system, they can edit them from the source. (This is basically the Amplify model.)

For #3:

  • Sell on the store, and get paid a fraction of what you are supposed to be paid.
  • Sell only through enterprise licenses. Get paid properly, and provide top-notch support.

Trying to balance all this is hard. And a lot of these lead to the source being in a DLL, which I’m personally not a huge fan of. I’d much prefer to open source it and have the community chip in, but a) Unity’s community is not historically good with that, so it would mostly fall onto me, and b) I’m not trying to go hungry atoning for Unity’s sins.

9 Likes

Most likely I’d just support LTS versions, and if it works on non-LTS versions, great. Now if this became a major revenue stream, I might increase that, but not for what MicroSplat’s HDRP/URP adapters make now (which is actually decent money, just not enough to rewrite them every other month).

That would be useful to know more about. I have not added a final-color style system yet, but that would be very easy to add. The code block is just kinda dropped into each pass, and I plan to have defines so you can easily write code for specific passes or SRPs if you need to do that kind of stuff. Things like structured buffers and such would just work.

Right now the ModifyVertex function is called before any transforms are done to the vertex, so it’s entirely local space. However, it would be trivial to break that into a function that is called on the vertex, and one that is called on the v2f structure after its transforms are done - so if you wanted to modify the vertex in clip space, or even get into funky low-level stuff like modifying the lightmap UVs in URP only, you could. But I haven’t exposed that yet because the v2f structure is different depending on pass and SRP, and that seems prone to error in a lot of cases.
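So today, something like this stays entirely in object space (the _PushAmount property is just illustrative):

   void ModifyVertex(inout VertexData v)
   {
      // v.vertex and v.normal are still in object space at this point,
      // so this is a plain offset along the local normal
      v.vertex.xyz += v.normal * _PushAmount;
   }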

Well, for instance, there are cheap approximations for SSS that would be fine for URP/Built In, so I could add those and have some kind of fallback setting so you can emulate it or not. Bent Normals I’d likely just drop. It’s kind of case by case, because HDRP has a lot more shading features, and some of them aren’t easily emulated. Instancing should just work.
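For reference, the kind of cheap SSS approximation I mean is the standard wrap/transmission style hack - nothing specific to this system, and the parameter names here are just illustrative:

   half3 CheapTranslucency(half3 lightDir, half3 viewDir, half3 normal,
                           half3 lightColor, half distortion, half power, half scale)
   {
      // fake light transmission through thin geometry: push the light
      // direction along the normal and compare it against the view direction
      half3 h = normalize(lightDir + normal * distortion);
      half transmission = pow(saturate(dot(viewDir, -h)), power) * scale;
      return lightColor * transmission;
   }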

I just wanted to say that I absolutely love an approach like this! Can’t wait to try it out. Letting everything be modifiable/having fewer restraints is a big plus too.

I have been working with the old surface shaders a good bit recently because I’m not yet willing to port my bigger project to one of the new render pipelines (it will probably be HDRP if so) - but even then I would likely go with Amplify Shader instead of Shader Graph (of the tools available to me so far).

What I wished for in the Standard Render Pipeline was a way to have master materials/material templates, after I saw how Unreal handles this (which is very designer friendly). A good comparison would be Prefabs to Prefab variants, where a material variant would inherit all properties and property references of the parent unless explicitly marked for overriding. That is most likely outside the scope of a tool meant to compile shaders for multiple render pipelines, but it’s still a thing that bothered me with the Standard Pipeline workflow.

1 Like

Just to chip in here with some piece of advice:
Have you checked how the whole Shapes approach via Patreon “pay what you think it’s worth” worked for Freya Holmér? It seemed like she made some quite ok returns by open-sourcing a “demo” and having people pay via Patreon.
I don’t know exact numbers, but it seemed to be a surprisingly viable approach for her. Maybe worth reaching out and getting some feedback from her about that?

1 Like

I bought that on a 50% sale. I was surprised by the results of the “how much should I charge for Shapes?” poll on Twitter.
I think compared to ShaderForge, Shapes is really expensive (when not on sale), but Freya absolutely deserves the success nonetheless.

All of that sounds absolutely great!

If I recall correctly, the specific surface shader feature we originally used was the finalgbuffer function, which let you blend the contribution to each render target separately by modifying the alpha of the output color. It didn’t allow you to do anything fancy like proper normal blending, since you can’t read what’s already in the target, but it was better than nothing. After that we also wanted to blend smoothness nicely, and that started complicating things because you had to make it a two-pass shader for an imperfect approximation (due to smoothness being packed into the alpha of one of the targets, and therefore needing to write a backing color into it). The current decal shader we use is just a fork of the Standard shader (not a surface shader) that first does a Blend Zero SrcColor pass to output something like

outGBuffer0 = half4(1 - alphaAlbedo, 1 - alphaAlbedo, 1 - alphaAlbedo, 1 - alphaOcclusion);

and then does a Blend One One pass to apply the actual contributions multiplied by per-output alpha. Hopefully that helps!

I think HDRP had some support for this decal shader style built in, last I tried it (when the FPS Sample was first released), but I haven’t had a chance to try that in the latest HDRP yet. I haven’t heard of URP having any mesh decal support. It’s a bit hard to imagine how to make a surface 2.0 shader like that work in all pipelines, especially when URP is added to the equation, since the whole idea is deferred specific. I’m curious what your opinion is on handling deferred-specific cases like per-gbuffer outputs - is that something your system should even try to cover?

I’m also curious if you could go into more detail about the outputs you’re planning to support. Are you planning to focus only on the case of PBR surface materials (akin to the Lit shader in the new pipelines), which is what the original surface shader system primarily cared about? Most of the surface shaders you see only use SurfaceOutputStandard, so that could be reasonable. Or are you planning to attempt replicating something akin to Shader Graph master nodes (Lit + Unlit + Hair + Decal + Fabric, etc.) across all pipelines? A lot of shaders make interesting decisions not in the fiddly details of using final results like the albedo value, but in how they arrive at a given value at a given point, so I can see a ton of value in getting multiple output types out of the box. But it’s probably a much bigger support burden to try covering all of those output types when only the basic lit surface is covered equally by all 3 pipelines.

To answer your questions:

- Which SRP do you use, if any?
I’m using HDRP for a PC Game.

- What use cases do you write custom or surface shaders for?
A custom grass shader with ComputeBuffers to be used with DrawMeshInstancedIndirect. I used DrawMeshInstanced before with MaterialPropertyBlock arrays, but the indirect method is even better for me. I tried to redo it in ShaderGraph, which worked with MaterialPropertyBlocks on single meshes, but not per mesh. “Access instanced props” is not supported in ShaderGraph, as far as I understood. I need that for grass interaction like bending, cutting, etc.

- What features did you find limiting in Surface Shaders that you would want to have control over?
In classic Surface Shaders (“1.0”) I always wanted to do more, like manipulating the vertices, which required me to write vertex/frag shaders from early on - I rarely touched surface shaders because they felt simple but limited. If I want a simple shader today, I’d use ShaderGraph.

- Which unique shading features do you use in your SRP (ie: Bent normals, SSS, etc), and how would you like to handle fallbacks in other pipelines (ie: Approximation of SSS in URP, don’t bother, etc).
I don’t really bother with those for my current project.