Focusing on Shaders

We know that the introduction of the Scriptable Render Pipelines brought a lot of complexity to writing shaders in Unity. We are working on improving that experience. As you are likely aware, the features we write today take almost a year to release in a shipped Unity Editor, so please keep that broad timeline in mind. We will share frequent updates on these plans and give you access to our work in alphas and betas for early feedback whenever possible.

Requirements
We’ve read these threads, your tweets, and your blog comments, and we’ve talked with some of you directly. We know the following features are critical:

  • You need to hand-write shaders and have that code work across pipelines and upgrade seamlessly to new versions of Unity.
  • You need a straightforward way to do that without a ton of boilerplate code.
  • You need these shaders, and ones created in Shader Graph, to be modular, composable, and work with custom Render Pipelines.
  • You need to create your own templates/targets for any render pipeline and let TAs and artists extend those in Shader Graph or via code.

The following system is designed to meet those requirements.

Design
We are adding the concept of composable Shader Blocks to our existing Shader Lab language. A Shader Block is an isolated piece of shader functionality that declares a public interface in the form of inputs and outputs.

Render pipelines will provide Shader Templates, each exposing a set of Customization Points (name TBD). Shader Blocks can be plugged into these Customization Points, and their inputs/outputs will be automatically matched together to pass data between them. You will also be able to manually control this data flow. A completed template outputs a subshader. The final shader is then composed from one or more subshaders.
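To make this concrete, here is a purely hypothetical sketch of what a block plugged into a template's customization point could look like. None of this syntax exists or is final (the post below notes syntax is still being designed); it only illustrates the inputs/outputs idea described above.

```
// HYPOTHETICAL syntax only - the real design is still in progress.
// A block declares a public interface of inputs and outputs.
Block "Dissolve"
{
    Inputs  { float3 positionWS; float dissolveAmount; }
    Outputs { float alpha; }
    // ...body computes outputs from inputs...
}

// A template exposes customization points; plugging a block in
// matches its inputs/outputs against the surrounding data flow.
SubShader Template "URP/Lit"
{
    CustomizationPoint "SurfaceDescription" { UseBlock "Dissolve" }
}
```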

Underlying this system will be an internal, intermediate representation of the Shaders that we’re calling the Shader Foundry. All Shader Block code, as well as Shader Graph files, will produce this unified representation, allowing both to use the same set of templates to output Shaders for each required pipeline.

Of course this is all very high-level, but over the next few months we’ll show you examples of the code and how to use it so we can refine our APIs based on your input.

Benefits
The biggest benefit of this system is that it supports every feature of the render pipelines. No more waiting for Shader Graph to add a node or workflow - URP and HDRP will expose their features directly. It also provides stability across upgrades so the shader code you write in it will work for years. Additional benefits:

  • Customizability - the provided customization points will give a lot of power to control the final result, and if they are insufficient you can create your own template either from the ground up or based on an existing template to limit or expand the end user’s ability to control the shader.

  • Reusability - Shader Blocks are interchangeable, mix-and-match pieces. Unity’s internal functionality will be written as blocks that you can incorporate into your own shaders. The Asset Store will use the same system as well.

  • Specialization - We will include the concept of pipeline specific blocks and branches so functionality can be adjusted based on which pipeline is in use, along with other performance, platform, and feature support branches.

  • The limitations of the system are driven by the render pipelines. For example, HDRP will not support custom lighting due to the complexity of its lighting system, while URP will support providing your own lighting function because URP’s lighting system is not as deeply integrated with the other parts of the rendering pipeline and is designed to be highly flexible. As part of our goal to provide a unified authoring environment across pipelines, we are finding ways to reference common concepts (like motion vectors) in a pipeline-agnostic way.

Surface Shaders?
A big question on many minds is, “Will this provide a feature-for-feature replacement for the BuiltIn Render Pipeline’s Surface Shaders?”. Our goal is to provide as much parity and support as possible within the constraints of a modern, performant graphics pipeline. Some of the features and syntax that Unity added to Surface Shaders are kind of bonkers! And you all did amazing, beautiful things with them! But that kind of deep access to every facet of the renderer’s internals just isn’t realistic in a modern rendering architecture. We’ll provide a lot more detail on this in the future, including a feature-by-feature comparison. For the Surface Shader features we can’t support, we’ll gather your input on what you used them for and do our best to provide alternative solutions.

Timeline
So when can you get your hands on this? It won’t all come right away. Over the next year, we will release the Shader Foundry layer - including a public, supported C# API for constructing shader code in the Foundry. We will also update our Shader Graph Targets to take advantage of the new system. Along with that, we will start releasing previews of the new ShaderLab features that will become the preferred way to write shaders for Unity. If you prefer, you will still be able to write directly in the low level languages you use today. Since this all has a lot of nuance and we need to get it right, we’ll be reviewing the work in progress with a lot of people both internally and externally before we finalize and release it.

Summary
This is a high-level plan, and we understand if you have questions or are just skeptical! Our best answer will be to execute and deliver a fantastic solution. Please let us know where you’d like additional clarity, or if there are use cases we haven’t captured here. We welcome your feedback and we look forward to the incredible experiences you’ll build with it!


Sounds great! I’m very interested to see the details and to port some things to the system once it’s available. Unfortunately, given the timeline, that leaves maybe another 5 years of supporting the existing mess, but it seems like eventually we will have nice things.

I’m glad you’re copying the stackable concept from Better Shaders, it will pay off big time, as being able to write and compose shaders from separate authors, written in text or graph, is kind of a holy grail for studios and asset store users. I’m still unclear if allowing multi-stacking (multiple copies of the same subshader added to one shader) was the right choice in Better Shaders, since it has to mutate the names of properties/keywords/variables to work, but it is a powerful feature if you choose to allow it. (Note, wasn’t clear if you have some kind of blackboard for data sharing between blocks, but that is needed as well. Might even want the equivalent of [RequireComponent()] for shaders?)

I’m also very interested in how you plan to handle optimizations and some of the more esoteric cases. For instance, optimizing out interpolator usage in shader variant scenarios, or handling things like centroid interpolation on texcoords, etc. For instance, in Better Shaders I don’t have the user write these structures by hand, which greatly reduces the complexity of writing shaders and keeps things standard and well named (which I suspect you’ll do as well given the shader graph/text approach here). Mostly, it just kinda works and strips what you don’t use. But there are times when in one variant you might not use some data and want to optimize it out, but making the parser detect and do this automatically would be a lot of work. So I ended up going for an “opt in” approach with these kinds of optimizations, where you can add an #if _FOO check around something like texcoord3 in your appdata structure via an option. Same for something like using centroid interpolation.
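For readers following along, the two opt-in controls described above look roughly like this in plain HLSL (the keyword `_FOO` and the struct member names are illustrative only, mirroring the example in the text):

```hlsl
// Plain HLSL illustration of the opt-in optimizations described above.
struct appdata
{
    float4 vertex : POSITION;
    float2 uv     : TEXCOORD0;
#if defined(_FOO)
    // Only compiled into variants that define _FOO, so other
    // variants never pay for the extra attribute/interpolator.
    float4 detail : TEXCOORD3;
#endif
};

struct v2f
{
    float4 pos : SV_POSITION;
    // 'centroid' forces sampling inside the covered area of the
    // triangle, avoiding edge artifacts under MSAA.
    centroid float2 uv : TEXCOORD0;
};
```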

I do think what you expose in a graph and what you expose in text can be different. For instance, adding compute support to a text based shader is a matter of adding some code to the shader (not the template), but adding support to a shader graph is not trivial, requiring whole new constructs and nodes. So in a way, I think the new approach, which allows for both within the same shader, can actually relax the requirements on the shader graph somewhat. Programmers can write a block which reads the compute data and provides the data to the blackboard, and artists can just plug it in and graph away. In fact, many things which the shader graph currently doesn’t support could be easily handled this way (terrain shaders, etc).

Question:

  • Will this target BiRP as well as SRPs? If not, we will still have to write for multiple pipelines until it either does or BiRP is removed.

Sounds very interesting and I look forward to hearing more!

This bit did concern me a little. Do you have any idea what limitations will be imposed in the future? I may be misunderstanding, but this feels at odds with the goals of the scriptable render pipelines (exposing more control and low level access to the user to enable ‘non-standard’ things to be done)


+1

Is there a rough ETA? Even when SpaceX designs rockets, they have a rough ETA :slight_smile:


Unity is not rocket science. :slight_smile:


This is very welcome news! Looking forward to trying the system out when it becomes available - the lack of a simple, maintainable shader programming solution similar to surface shaders has been the single biggest issue I’ve had with URP/HDRP.

Jason! Thanks for your great feedback. You get it, and the points you’ve highlighted (like name mangling) are exactly what we’re working out and debating internally. Seeing how people use Better Shaders has given us invaluable insight into the opportunities and pitfalls awaiting us! And you’re exactly right - this will free Shader Graph to focus on providing fabulous artist workflows without the expectation of exposing every nook and cranny of the domain.

Regarding optimizations, that’s one of the major strengths of the Foundry concept. Instead of splicing strings around, we’ll have a full data representation of the shader blocks that we can reason about along multiple axes - platform support, quality levels, etc. I think your “opt-in” approach is a good one for when automatic optimization fails. Once we’re a little further along, I’m sure we’ll have some great conversations about how best to Make Go Fast.

As for BuiltIn Render Pipeline support, that’s the hidden driver behind the BiRP Target we’re releasing for Shader Graph in Unity 2021.2. We’re already testing this new work using that target so it can be a bridge for our customers to reduce the effort required to upgrade to SRP when they’re ready.

ElliotB, we’re not imposing any limitations on what SRP already supports. As I said, we’ll have the Surface Shader feature comparison out as soon as we can.

Hippocoder and Valarus, we’re not rocket scientists. :smile: As I said, “Over the next year…we’ll start releasing previews.” As soon as we’re confident in a more specific timeline, we’ll share.


Sounds great, thank you! I’ve really enjoyed the new possibilities from SRP so I’m excited to see what comes next.

@jbooth_1 Thank you for the feedback!

Blocks produce and consume data using inputs and outputs. Passing data from one block to another is as simple as linking an output of a producer to an input of a consumer. We’ll also make some rules for passing data around implicitly so that one doesn’t have to type in everything.

Definitely!


Great, so if I have a block that only works on vertex and that’s really optimised, the result will get passed to fragment just like the old surface shaders? This way I can calculate some pretty expensive things at a lower granularity. Also, what about interpolation for that? Assuming it’ll just work if I packed a spare “texcoord”. I think a lot of that should be cleared up.

My current way of working (so I remain sane) is to just go directly to a function node, work in my HLSL and exit to the final output (base, emission, etc).

One of my needs was to be able to do something to the colour after all lighting has been calculated by Unity, would that be possible too? It’s not the maximum priority but it’d be nice to do dither myself, or other stuff at the end like fast object based grading/tone mapping, and I can’t currently without losing hair or using an unlit shader… not ideal.

I tend to develop for VR and low power devices but have a huge bag of tricks that sadly died when URP rolled around due to lack of flexibility and access.

How much control do we have?


Yep. We’ll provide a mechanism to pass data from a block in one stage to a different block in a following stage. You’ll also be able to control how it’s interpolated.
Do you think explicit control over what gets packed with what will be needed? We had an automagic system for interpolator packing in mind.

This will be possible once the render pipeline you work on provides a customisation point in their templates. I suppose you’re talking about the lack of finalColor from surface shaders in the SRP land, is that right?
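For context, the surface shader feature being referenced works like this in the Built-in Render Pipeline (existing, documented BiRP syntax, not the new system; `mycolor` and `_ColorTint` are the placeholder names from Unity’s own examples):

```hlsl
// BiRP surface shader: the finalcolor modifier runs after all lighting,
// which is where per-object dithering or grading currently hooks in.
#pragma surface surf Lambert finalcolor:mycolor

void mycolor(Input IN, SurfaceOutput o, inout fixed4 color)
{
    // Adjust the fully lit color, e.g. apply a tint or dither here.
    color *= _ColorTint;
}
```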


Yeah I just want to do grading and the likes in “mesh” shaders as I can’t really afford to do it as a post process. I’m actually not using any post process at all (it’s all local to the object shaders).

Turns out in 2021, it’s still faster to do old tricks (for VR and switch at least).


Automatic packing is a good default, especially if we can see how the data actually gets packed.
But sometimes we want to pack data in a specific way, because some additional data can be derived from that format, or for similar reasons. So I guess we will ask for a manual option eventually.

I don’t know exactly what @hippocoder wants to say, but I’ve often fallen into situations where I want to replace some block at the project level, not at the level of a specific shader - pre/post-process any input of the master node, or replace the SampleTexture2D node for all shaders (or some selection) in the project, etc.

The main thought is that we want to be able to pre/post-process all shaders (or internal pieces of them) in a project (from the store or any other source) to meet a specific project art style.
For example: Unreal has the concept of a Base Material, and to apply virtual texturing we need to replace the texture sampler in that base material with a Virtual Texture sampler:

  • When adding a virtual texture to a Material Graph, UE assigns the Virtual sampler type automatically. However, should you make the expression a Texture Sample Parameter that you can use in Material Instances, keep in mind that the base material applies the Virtual sampler type to all child instances.

Would it be possible to do something like this?

Would it be possible to create a mesh with a very custom format and custom data of custom data types :slight_smile: and feed it through our special custom shader block to URP?
I mean, we have a specially packed vertex format and unpack it so the other parts of the vertex shader can use it as normal. This would additionally require mesh previews to know which block to use to unpack the vertex data and actually render the preview.

This seems orthogonal to interpolator packing. The data will be rearranged back to the same layout when unpacking interpolators anyway.

We thought about this. We haven’t settled on exactly how to do it yet. Probably some name-based solution.

This depends on the template design on the SRP side :slight_smile: Sounds like a reasonable thing to be able to customise.


My thoughts on this:

  • Type-based for nodes (or GUID-based if the node is an asset)
  • UID-based for master node ports (master node ports must be stable)
  • Some sort of shader inheritance, so all shaders inherit a BaseShader and we can change that base shader in the project to change the whole project. Maybe that BaseShader looks like a MasterNode from within other shaders
  • Shader selection based on tags (e.g. add tags to all VFX shaders and then apply a replacement by tag)

I think we need the concept of redefining shader outputs (a MasterNode as a small subshader) and
the concept of redefining shader inputs (like redefining TextureSamplers, LightData, ShadowData, VertexData, LightmapData, MainTexture (a well-known texture input), PerInstanceDataSource…).

Hope this is useful :slight_smile:

P.S.: With this big constructor I wish, once again, for on-demand runtime shader compilation instead of the current, hard-to-deal-with approach of compiling 100500 shader variants at build time :slight_smile:

Nope :slight_smile:
This will all be Editor side, outputting a .shader file.

All noted and will be given thought :slight_smile:
Thank you!


This seems very “shader graph centric”, does this work with text based shaders? Just don’t want to end up in another case where only half the tool works from scripting.

I think the trick here is around things like conditional compilation and which stage something is used in. In Better Shaders, the default thing is to just pass anything used by the shader over to the pixel shader. However, this can be wasteful if you are only using vertex color in the vertex stage, or only using it when some keyword is set, or unpacking one type of data into another type before sending to the next stage. So I’ve added various opt-in settings for stuff like this, where you can say “This is only used in this stage, so don’t pass it” or “add this #ifdef around this interpolator”. It’s not “performance by default”, rather “tell me some conditionals or guarantees and I’ll optimize this out”.

Ideally this would all be automatic, but then you’d run into the surface shader issue of having to generate all the variants to figure out what cases need what data, which basically capped that system to a small number of keywords or the generator would explode. Better Shaders attempts to do everything with a really dumb parser; and I think that’s actually good- but this is one area where more knowledge of the code would be useful.

The problem with global replacement is usually you don’t mean global. For instance, shaders are used in UI drawing, editor code, etc, and may not want to be modified. So usually these are scoped to something like “Surfaces”. I would suggest handling this via the blocks system - if Unity is going to ship their shaders as blocks to build from, then you can insert a global block and have all your shaders use that as the base instead of Unity’s.

What might be nice is if templates can have a list of blocks they include added on a project basis. Let’s take “Curved World” as an example - you need to add a vertex modifier function to every surface shader, but not ones for UI and such. You could modify the template to do this, but then when you upgrade Unity it breaks as the new template overwrites it. Or you could go to every shader you use and add the block. But both are brittle in their own ways. If instead, you could go to the “URP Lit Template” and add a project-wide custom block there, then all the shaders would get recompiled with this new code automatically.
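To make the “Curved World” example concrete, the body of such a project-wide vertex-modifier block might contain something like this. This is a hedged sketch of the general technique (bending geometry down with distance from the camera); the function and parameter names are made up for illustration.

```hlsl
// Hypothetical vertex-modifier body for a "Curved World" effect.
// Pulls vertices down proportionally to their squared horizontal
// distance from the camera, curving the world away from the viewer.
float3 ApplyWorldCurvature(float3 positionWS, float3 cameraPosWS, float curvature)
{
    float2 offset = positionWS.xz - cameraPosWS.xz;
    positionWS.y -= dot(offset, offset) * curvature;
    return positionWS;
}
```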


Yes, we’re working on syntax.

Depends on how you break up the shader into blocks - this can become N+M instead of N*M, which can be OK.

If you can base a template on another template, this one may work. This will need more thinking anyway :slight_smile:

Right - I had considered analyzing each stage separately - but because my parser is particularly stupid, it would need to be a lot smarter about things like shared functions used in any of the stages, etc. And the #ifdef case could likely be handled by tracing the control flow and pushing defines onto a stack which can be checked, giving each use of, say, .vertexColor, a scope of defines it exists within. Having optimizations like these be automatic would be really nice, as even in traditional shaders you can spend a lot of time managing this kind of stuff, and conditional code compilation hides bugs like nothing else.


URP/HDRP exclusive?