Quite frankly a graph will never get there, and forcing us to use a graph for everything just to make shaders maintainable is wrong. You're not going to force us to abandon C# and make C++ the only text-based language when the visual scripting system ships, are you? If you want to make sure the graph can do everything, rewrite all of Unity's hand-written shaders with it. If a graph is good enough for us, it's good enough for you too. Dogfood it.
As an example of why you will never get there, take something like the basemap generation for terrain. This is a really cool feature that can be used for more than just basemap generation: it lets you add passes that generate a render texture for use in your terrain shader. These passes can be anything you want, and they use tags to specify the name, format, and relative size of the render texture. Here's an example from Unity's terrain shader in case you're not familiar with it:
Pass
{
    Tags
    {
        "Name" = "_MetallicTex"
        "Format" = "RG16"
        "Size" = "1/4"
    }

    ZTest Always Cull Off ZWrite Off
    Blend One [_DstBlend]

    HLSLPROGRAM

    #define OVERRIDE_SPLAT_SAMPLER_NAME sampler_Mask0
    #include "Packages/com.unity.render-pipelines.high-definition/Runtime/Material/TerrainLit/TerrainLit_Splatmap.hlsl"

    float2 Frag(Varyings input) : SV_Target
    {
        TerrainLitSurfaceData surfaceData;
        InitializeTerrainLitSurfaceData(surfaceData);
        TerrainSplatBlend(input.texcoord.zw, input.texcoord.xy, surfaceData);
        return float2(surfaceData.metallic, surfaceData.ao);
    }

    ENDHLSL
}
Now let's say I have to use the graph for maintainability and I want to write a terrain shader, which means I'll need to write a basemap generation shader as well. But wait, you say, this isn't a surface shader, so it doesn't count, right? Except that if my entire shader is written in a shader graph, I need to call that code from this pass, and the only way to do that is to support all of this in the graph as well (or constantly hack the code I need back out every time I change the shader graph, which is a nightmare). And currently I can use this feature to do things Unity doesn't use it for, like baking procedural texturing out into a splat map, or any other data I want to bake every time the terrain is changed.
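To make that concrete, here's a rough sketch of what the basemap pass would have to look like if my splat blending lived in a graph. The include path, GraphSurface, and EvaluateGraphSurface are made-up names - the graph doesn't actually expose its generated code in a callable form like this today, which is exactly the problem.

// Hypothetical only: "GeneratedGraph.hlsl", GraphSurface, and EvaluateGraphSurface
// are invented names standing in for whatever the shader graph would have to
// generate and expose for this to work. The rest of the pass stays as in the
// example above.
#include "GeneratedGraph.hlsl"

float2 Frag(Varyings input) : SV_Target
{
    // Re-use the exact surface logic the graph generated for the main shader,
    // instead of maintaining a hand-copied duplicate inside this pass.
    GraphSurface s = EvaluateGraphSurface(input.texcoord.xy, input.texcoord.zw);
    return float2(s.metallic, s.ao);
}

Until something like that exists, the only options are to rebuild the whole basemap pass inside the graph or keep hand-merging its generated code into mine.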
This is where text representations just shine. Adding this functionality to the terrain system was likely pretty straightforward: read some tags from the shader, generate some render textures, render the passes into those buffers, set the buffers on the main terrain material, profit. Adding the same functionality to the graph would require a new master node with custom passes and settings, making custom features like this much more expensive for Unity to add. So if you really want to push everything through the graph, you need to dogfood it as such: stop writing hand-written shaders and begin the process of supporting all of these edge cases, which will in effect bring other areas of development to a crawl. Oh, and don't forget that I could easily have written this system myself, so the shader graph will need to support any non-surface-shader system as well, since once my code is in the graph I'll need to be able to call that code from any type of shader I might need.
You don't want the graph to be everything to everybody: it's not achievable, and it will just cripple everyone in the long run. It should be focused on what graphs are good for - shaders that are closely tied to the art. And you should be writing an abstraction for hand-written shaders which allows them to excel at the things a graph just isn't good for.
But to answer your question:
Since I write shader generators, I can basically switch anything in a surface shader very easily by generating different code. I guess it's theoretically possible for you to write a system where I can dynamically generate a graph, but that seems pretty painful compared to just writing the code the graph would generate anyway.

- Ability to understand the code the graph is going to write. Graphs are an abstraction, and every abstraction hides information, which means you're further from the code. That always has a cost, and it's very easy for a graph to hide that cost without you realizing it. Much better information would be needed here, like a code output window, feedback from the compiler on cost, etc.
- Control over the V2F structure and how data moves across the stages (this was limited in surface shaders in some cases)
- Ability to perform work in the vertex function
- Structs - wiring is just not maintainable through complex systems
- Macros. I avoid these in shaders, but they make some things possible
- Better handling of sampling, such that no node gets direct access to the sampler/texture but sampling nodes can somehow be chained together. Right now a triplanar node takes a texture and a sampler, but if you want to do POM they can't be combined, because the POM node also needs the texture and a sampler (see the sampling sketch after this list)
- Ability to have thousands of shader_feature equivalents (requires the ability to dynamically emit code, the way my compiler does, with #if/#elif around it - see the variant sketch after this list)
- Ability to support multiple lighting models within a single shader (I support specular and metallic workflows, along with multiple BRDFs and unlit, switching between them with compile-time generation setting various pragmas and defines - the variant sketch below covers this too)
- Tessellation
- Pragmas, custom tags, etc.
- Fallback and other special shader options (basemap shader, basemap shader generation passes, etc.)
- Instancing, including terrain instancing variants
- Interfacing with compute shaders
- Proper branching and handling of derivatives (see the derivative sketch after this list)
- Ability to have custom editor GUIs
- Access to the TBN matrix before lighting (I do things in a custom lighting function to blend normals - see the normal blending sketch after this list)
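To clarify the sampling point: in plain HLSL the texture and sampler are separate objects, so a parallax step and a later sampling step can share them and be chained - the adjusted UVs from one simply feed the next sample. A minimal sketch with made-up helper names (this is plain HLSL, not Unity's node API, and it stands in for the triplanar/POM combination I mentioned):

// Plain HLSL sketch (hypothetical helpers, not Unity includes) of chaining
// sampling steps: both functions receive the texture/sampler objects, so the
// output of the parallax step can drive the later samples.
Texture2D _Albedo;
Texture2D _Height;
SamplerState sampler_linear_repeat;

float2 ParallaxOffsetUV(Texture2D height, SamplerState s, float2 uv, float3 viewDirTS)
{
    // crude single-tap parallax, just to show the data flow
    float h = height.Sample(s, uv).r;
    return uv + viewDirTS.xy / viewDirTS.z * (h * 0.05);
}

float4 SampleAlbedoWithPOM(float2 uv, float3 viewDirTS)
{
    float2 pomUV = ParallaxOffsetUV(_Height, sampler_linear_repeat, uv, viewDirTS);
    return _Albedo.Sample(sampler_linear_repeat, pomUV); // chained: POM output feeds the sample
}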
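For the shader_feature and lighting model points, this is the kind of compile-time switching I mean. shader_feature is a real Unity pragma, but the _WORKFLOW_* keywords and the BRDF functions here are placeholders of the sort my generator would emit:

// The generator emits the pragma plus the #if/#elif blocks; only the variants
// actually used get compiled. The keyword names and BRDF stand-ins below are
// illustrative, not existing Unity defines.
#pragma shader_feature_local _WORKFLOW_METALLIC _WORKFLOW_SPECULAR _WORKFLOW_UNLIT

// Trivial stand-ins so the sketch is self-contained; a real generator would
// emit full BRDF implementations here.
half3 MetallicBRDF(half3 albedo, half metallic, half3 n, half3 v) { return albedo * (1.0 - metallic); }
half3 SpecularBRDF(half3 albedo, half3 spec, half3 n, half3 v)    { return albedo + spec * 0.04; }

half3 ShadeSurface(half3 albedo, half3 specular, half metallic, half3 normalWS, half3 viewDirWS)
{
    #if defined(_WORKFLOW_UNLIT)
        return albedo;
    #elif defined(_WORKFLOW_SPECULAR)
        return SpecularBRDF(albedo, specular, normalWS, viewDirWS);
    #else
        return MetallicBRDF(albedo, metallic, normalWS, viewDirWS);
    #endif
}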
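For branching and derivatives, hand-written HLSL lets you take the UV derivatives outside the branch and use SampleGrad inside it, so a real [branch] stays a branch without breaking mip selection - the kind of thing a graph has to understand to generate correct code:

// Compute the gradients before diverging, then sample with SampleGrad inside
// the branch, so the dynamic branch doesn't produce mip artifacts.
float4 SampleDetailBranched(Texture2D tex, SamplerState s, float2 uv, bool useDetail)
{
    float2 dx = ddx(uv);
    float2 dy = ddy(uv);

    [branch]
    if (useDetail)
        return tex.SampleGrad(s, uv * 8.0, dx * 8.0, dy * 8.0);

    return tex.SampleGrad(s, uv, dx, dy);
}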
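Finally, the TBN point: I blend normals in tangent space inside a custom lighting function and only then transform to world space, which means I need the TBN matrix before lighting runs. A whiteout-style blend as a minimal example (plain HLSL, not from any Unity include):

// Blend two tangent-space normals (whiteout blend), then transform once with a
// TBN matrix whose rows are the world-space tangent, bitangent, and normal.
float3 BlendNormalsTS(float3 n1, float3 n2)
{
    return normalize(float3(n1.xy + n2.xy, n1.z * n2.z));
}

float3 BlendedWorldNormal(float3 n1TS, float3 n2TS, float3x3 tangentToWorld)
{
    return normalize(mul(BlendNormalsTS(n1TS, n2TS), tangentToWorld));
}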