Making SRP shaders easier to write

So, surface shaders have been discontinued, and Unity has shown no interest in writing a system to abstract shaders between render pipelines unless they go through a shader graph. While I could rant for hours about how short-sighted this is, this post is about how to make the current situation for hand-writing shaders better.

Ideally, what we want is to be able to write shader code which:

  • is portable across pipelines (URP/LWRP, HDRP, Standard)
  • is upgradable across changes Unity makes (SRP/Unity versions, new lighting features)
  • abstracts away the complexity of managing all the passes required
  • is reasonably well optimized
  • hides the code we don’t care about

I recently finished writing an adapter for both URP (aka LWRP) and HDRP, allowing my product MicroSplat to compile its shaders for all three pipelines. MicroSplat generates shader code, similar to a shader graph, and was modified to support an interface so that each pipeline could decide how to write that code. I wrote the URP adapter first, and since it was similar to the standard pipeline it was relatively easy to understand what was needed. What the adapter does is:

  • Write out the bones of the shader (properties, etc)
  • For each pass:
      • Write out the pass header, includes, pragmas and such
      • Write out the macros and functions needed to abstract the differences between URP and standard, such as the WorldNormalVector function, or defining _WorldSpaceLightPos0 as _MainLightPosition (see the sketch after this list)
      • Write out my code and functions in surface shader format
      • Write out the URP code for the vertex/pixel functions, which packs its data into the structs I use and then calls my code
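
For example, a couple of those shims might look roughly like this (a sketch of the idea, not the exact code MicroSplat emits):

// built-in name -> URP name
#define _WorldSpaceLightPos0    _MainLightPosition

// surface shaders' WorldNormalVector(IN, normal) converts a tangent-space normal to world space;
// this assumes the Input struct carries the TBN matrix shown later in this post
#define WorldNormalVector(i, n) mul((n), (i).TBN)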

With this, my surface shader code runs in URP. Given that compiled shaders do not actually retain structures, all this copying of data into new structures essentially compiles away, making it just as efficient in most cases.

I tried to take this same approach when porting to HDRP, and after many false starts came to the conclusion that understanding the HDRP code at this level would not only be extremely difficult, but would make it a compatibility nightmare when changes occurred. I wanted something that could be easily updated when new versions of HDRP changed things, so I went with another approach instead.

Rather than writing out all of the passes and such, I’d export a shader from Unity’s shader graph which contains various insertion points for my code. The same basic issues arise: I need to add functions and macros to reroute missing surface shader functions and conventions, like:

#define UNITY_DECLARE_TEX2D(name) TEXTURE2D(name);
#define UNITY_SAMPLE_TEX2D_SAMPLER(tex, samp, coord)  SAMPLE_TEXTURE2D(tex, sampler_##samp, coord)
#define UnityObjectToWorldNormal(normal) mul(GetObjectToWorldMatrix(), normal)

Then copy their structs to mine:

Input DescToInput(SurfaceDescriptionInputs IN)
{
    Input s = (Input)0;
    s.TBN = float3x3(IN.WorldSpaceTangent, IN.WorldSpaceBiTangent, IN.WorldSpaceNormal);
    s.worldNormal = IN.WorldSpaceNormal;
    s.worldPos = IN.WorldSpacePosition;
    s.viewDir = IN.TangentSpaceViewDirection;
    s.uv_Control0 = IN.uv0.xy;

    return s;
}

And in each pass on the template, call my function with that data.

While doing this I learned a lot about how Unity abstracts these problems internally in HDRP, allowing them to write less of the code in each shader graph shader. I think this approach could be improved to make hand-written HDRP shaders much easier to write. With a bit more work, you could have a .surfshader file type which uses a scripted importer to inject the code inside of it into a templated shader like the one I use, and essentially have a large chunk of what surface shaders provide. Further, if LWRP were to follow the same standards, then porting from one pipeline to the other could also be automatic. To understand this, let’s look at how an HDRP shader graph’s code is written:

A series of defines are used to enable/disable things needed from the mesh:

#define ATTRIBUTES_NEED_TEXCOORD0

Then code in the vertex shader can use this define to filter which attributes are included in the appdata structure. The same trick is used for things needed in the pixel shader:

#define VARYINGS_NEED_TANGENT_TO_WORLD

Then any code which works with these things can check these defines to see if they exist.
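
Conceptually, the mesh attributes struct ends up looking something like this (a simplified sketch of the idea, not the literal HDRP definition):

struct AttributesMesh
{
    float3 positionOS : POSITION;
#ifdef ATTRIBUTES_NEED_NORMAL
    float3 normalOS   : NORMAL;
#endif
#ifdef ATTRIBUTES_NEED_TEXCOORD0
    float4 uv0        : TEXCOORD0;   // only present when the pass asks for it
#endif
};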

This allows them to abstract many internals and determine whether various chunks of code need to run. However, the graph does not take this far enough, if you ask me. For instance, the graph writes out various packing functions to move data between the vertex and pixel shader; if this convention were fully followed, those could be entirely #included instead of written out. It also writes functions to compute commonly needed things in the pixel shader, such as the tangent-to-world matrix, but writes these functions out each time instead of relying on the defines to do the filtering. For instance:

SurfaceDescriptionInputs FragInputsToSurfaceDescriptionInputs(FragInputs input, float3 viewWS)
{
    SurfaceDescriptionInputs output;
    ZERO_INITIALIZE(SurfaceDescriptionInputs, output);

    output.WorldSpaceNormal =            normalize(input.tangentToWorld[2].xyz);
    // output.ObjectSpaceNormal =           mul(output.WorldSpaceNormal, (float3x3) UNITY_MATRIX_M);           // transposed multiplication by inverse matrix to handle normal scale
    // output.ViewSpaceNormal =             mul(output.WorldSpaceNormal, (float3x3) UNITY_MATRIX_I_V);         // transposed multiplication by inverse matrix to handle normal scale
    output.TangentSpaceNormal =          float3(0.0f, 0.0f, 1.0f);
    output.WorldSpaceTangent =           input.tangentToWorld[0].xyz;
    // output.ObjectSpaceTangent =          TransformWorldToObjectDir(output.WorldSpaceTangent);
    // output.ViewSpaceTangent =            TransformWorldToViewDir(output.WorldSpaceTangent);
    // output.TangentSpaceTangent =         float3(1.0f, 0.0f, 0.0f);
    output.WorldSpaceBiTangent =         input.tangentToWorld[1].xyz;
    // output.ObjectSpaceBiTangent =        TransformWorldToObjectDir(output.WorldSpaceBiTangent);
    // output.ViewSpaceBiTangent =          TransformWorldToViewDir(output.WorldSpaceBiTangent);
    // output.TangentSpaceBiTangent =       float3(0.0f, 1.0f, 0.0f);
    output.WorldSpaceViewDirection =     normalize(viewWS);
    // output.ObjectSpaceViewDirection =    TransformWorldToObjectDir(output.WorldSpaceViewDirection);
    // output.ViewSpaceViewDirection =      TransformWorldToViewDir(output.WorldSpaceViewDirection);
    float3x3 tangentSpaceTransform =     float3x3(output.WorldSpaceTangent, output.WorldSpaceBiTangent, output.WorldSpaceNormal);
    output.TangentSpaceViewDirection =   mul(tangentSpaceTransform, output.WorldSpaceViewDirection);
    output.WorldSpacePosition =          GetAbsolutePositionWS(input.positionRWS);
    // output.ObjectSpacePosition =         TransformWorldToObject(input.positionRWS);
    // output.ViewSpacePosition =           TransformWorldToView(input.positionRWS);
    // output.TangentSpacePosition =        float3(0.0f, 0.0f, 0.0f);
    // output.ScreenPosition =              ComputeScreenPos(TransformWorldToHClip(input.positionRWS), _ProjectionParams.x);

    output.uv0 =                         input.texCoord0;
    // output.uv1 =                         input.texCoord1;
    // output.uv2 =                         input.texCoord2;
    // output.uv3 =                         input.texCoord3;
    // output.VertexColor =                 input.color;
    // output.FaceSign =                    input.isFrontFace;
    // output.TimeParameters =              _TimeParameters.xyz; // This is mainly for LW as HD overwrite this value

    return output;
}

If, instead of commenting and uncommenting these lines, they were simply wrapped in the define checks, this function would not need to exist in the top-level pass at all, and could instead be #included from some file:

#ifdef VARYINGS_NEED_WORLD_SPACE_POSITION

float3x3 tangentSpaceTransform =     float3x3(output.WorldSpaceTangent, output.WorldSpaceBiTangent, output.WorldSpaceNormal);
output.TangentSpaceViewDirection =   mul(tangentSpaceTransform, output.WorldSpaceViewDirection);
output.WorldSpacePosition =          GetAbsolutePositionWS(input.positionRWS);

#endif

The same would be true of things like the structure definitions:

struct SurfaceDescriptionInputs
{
#ifdef VARYINGS_NEED_WORLD_SPACE_POSITION
    float3 WorldSpacePosition; // optional
#endif
#ifdef VARYINGS_NEED_UV0
    float4 uv0; // optional
#endif
};

If this were done, very little code would have to exist in the actual output shader: just some defines that say what you are using from the included code, plus the code you actually care about.

There’s some squirminess about whether we even have to #if around any of these: if the data is only computed in the pixel shader, then any of these values we don’t use would get stripped by the compiler. So we don’t really need a “VARYINGS_NEED_WORLD_SPACE_POSITION” define at all, since the compiler will strip those values and calculations if we don’t use them. In reality, we only need to define what goes across the vertex->pixel stages (hull, domain, etc. too), but Unity seems to output code that’s super specific here, so I’m following that pattern.

With that, a pass might look something like this:

#define ATTRIBUTES_NEED_POSITION          // allow position in AttributesMesh struct
#define ATTRIBUTES_NEED_UV0                    // allow uv0 in AttributesMesh struct
#define VARYINGS_NEED_UV0                       // Allow/copy to SurfaceDescriptionInputs struct
#define VARYINGS_NEED_WORLD_SPACE_POSITION
#define HAS_MESH_MODIFICATIONS           // Call my custom vertex function

AttributesMesh ApplyMeshModification(AttributesMesh input, float3 timeParameters)
{
    input.uv0 += timeParameters.x;
    return input;
}

TEXTURE2D(_MainTex);
SAMPLER(sampler_MainTex);

SurfaceDescription SurfaceDescriptionFunction(SurfaceDescriptionInputs IN)
{
    SurfaceDescription o = (SurfaceDescription)0;
    o.Albedo = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, IN.uv0.xy).rgb;
    return o;
}

That now looks really manageable for a pass, right? There’s nothing about the code we have written that could not run in URP as well as HDRP. There aren’t pages of code around it, slightly modified for every shader; just what we care about. And none of that requires anything but some refactoring of the existing code that the shader graph writes out.

Where it gets really interesting:

So if we take this a bit further, we could write a scripted importer (ScriptedImporter) which takes this code and inserts it into each pass of a templated shader file, very similar to what the graph does anyway, but without all the commenting and uncommenting of code and structure declarations. The one issue here is that some passes don’t require computing all of the code. For instance, if you’re doing a shadow caster pass, you don’t care about albedo/normals/etc, unless those components have something to do with whether that pixel should be clipped or not.
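
To make the importer half of that concrete, the per-pass template might contain little more than an insertion marker (a sketch only; the marker and include names here are invented):

// hypothetical pass template the importer would fill in
#include "SurfShaderPassBoilerplate.hlsl"   // hypothetical shared structs, packing and defines

// SURFSHADER_USER_CODE                     // the importer pastes the .surfshader contents here

#include "SurfShaderPassForwardEntry.hlsl"  // hypothetical vert/frag entry points that call ApplyMeshModification / SurfaceDescriptionFunction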

Luckily, in many cases the shader compiler strips most of that unused code for us, so it doesn’t matter much if it’s in there. The internal functions could provide dummy data to these structures when they generally aren’t needed, with defines available to override that behavior when necessary: something like PASSSHADOWCASTER_NEED_TANGENT, if for some reason you really want a real tangent in your MeshAttributes and SurfaceFragmentInput structures instead of dummy data the compiler can use and strip.
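
Inside the shared include, that could look something like this (illustrative only; the pass check is approximate and PASSSHADOWCASTER_NEED_TANGENT is the hypothetical override from above):

#if (SHADERPASS == SHADERPASS_SHADOWS) && !defined(PASSSHADOWCASTER_NEED_TANGENT)
    // shadow pass: hand the struct dummy tangents; the compiler strips whatever the user code never reads
    output.WorldSpaceTangent   = float3(1, 0, 0);
    output.WorldSpaceBiTangent = float3(0, 1, 0);
#else
    output.WorldSpaceTangent   = input.tangentToWorld[0].xyz;
    output.WorldSpaceBiTangent = input.tangentToWorld[1].xyz;
#endif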

So at this point, if both the LWRP and HDRP shaders followed these semantics, we’d have a shader that gives us most of the benefits we want. We can write something simple without thinking about passes and such, we can tell it what we need in terms of mesh and pixel data and be efficient about it, and we have something compatible with both pipelines, assuming we’re not using things which don’t exist in both. Additional defines could be used to select which template is used (SSS, decals, etc) and to enable/disable attributes of the structure and packing routines accordingly. You’d have to wrap your assignment of those in the same checks, but that seems reasonable. You lose the ability to name things in your structures, but that honestly seems like a win to me…
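
For instance, extending the example pass above (the template/emission define names are invented for illustration):

#define SURFSHADER_TEMPLATE_SSS       // hypothetical: use the subsurface scattering template instead of the default lit one
#define SURFACE_NEED_EMISSION         // hypothetical: include Emission in SurfaceDescription and its packing

SurfaceDescription SurfaceDescriptionFunction(SurfaceDescriptionInputs IN)
{
    SurfaceDescription o = (SurfaceDescription)0;
    o.Albedo = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, IN.uv0.xy).rgb;
#ifdef SURFACE_NEED_EMISSION
    o.Emission = o.Albedo * 0.25;     // assignments to optional outputs are wrapped in the same checks
#endif
    return o;
}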

43 Likes

I would also note that it’s different for every case. In one case you may want to replace material properties but leave lighting/fog/etc untouched. In another case you want an essentially fully working original Standard/Lit shader, but replace e.g. the forward fog code, or even just apply some fancy transformation to the final color (or final GBuffer values).
That’s why, in my opinion, surface shaders only fixed one part of the problem without making the other part easier.

This is actually extremely similar to the chunk system I’m currently using (I wouldn’t call it a pass, because this word already means too many things in gfx).

So in the chunk system you have templates and chunks.
A template for a forward renderer can look like this (pixel shader body only, to keep it short):

                getAlpha(IN);

                getPerPixelWorldNormal(IN);
                getPerPixelViewDir(IN);
                getAlbedo(IN);
                getEmission(IN);
                getSpecular(IN);
                getGlossiness(IN);

                getAmbientLighting(IN);
                getDirectLighting(IN);
                getReflection(IN);

                combineColor(IN);
                addFog(IN);

The template defines the general flow and order of execution. For a deferred renderer, for example, it can be different. Templates are high-level; they must be easy to read and understand, so you don’t have to jump through 30 include files to get the general idea. You should be able to implement one for a custom RP as well.

Each function is included from a separate “chunk” file, and you can override chunks. If you only care about overriding albedo, you write an albedo chunk, leaving other parts as default:

//@V_float2_TexCoord0
//@PARAM _Color ("Color", Color) = (1,1,1,1)
//@PARAM _MainTex ("Texture", 2D) = "white" {}

float4 _Color;
sampler2D _MainTex;

void getAlbedo(in VSData IN)
{
    pAlbedo = tex2D(_MainTex, vUV0).rgb * _Color.rgb;
}

Here I’m using these “//@” comments as shader generator hints (similar to #pragma usage in surface shaders). “//@V” means the chunk requires a specific varying (VS->PS) parameter, and “//@PARAM” adds a property to the UI.
The shader generator also does some minimal “smart” stuff, like removing duplicate variable declarations.

Now if you have a chunk interface like this, you could plug it into any RP without caring about its tech details. Any forward/deferred/whatever pipeline has albedo, so it could work. Some types of chunks, however, are not universal, like the “fog” one, which won’t apply in a deferred pipeline*.

Vertex shader chunk template code is slightly trickier, because it depends on the pixel shader “demand”:

                #ifdef V_Normal
                    getWorldNormal(IN);
                #endif

                #ifdef V_Tangent
                    getWorldTangentAndBinormal(IN);
                #endif

                #ifdef V_TexCoord0
                    getTexCoord0(IN);
                #endif

It is basically the same, but there are these V_ ifdefs which correspond to “//@V” mentions in PS (which are conceptually similar to your ATTRIBUTES_NEED_TEXCOORD0).

Each vertex shader chunk can be replaced, just like the pixel shader chunk:

//@ATTRIB float2 TexCoord0 : TEXCOORD0;

void getTexCoord0(in MeshData IN)
{
    vUV0 = IN.TexCoord0;
}

And they also use the “//@ATTRIB” hint that extends VS input structure.

For this kind of shader generation I currently have some very minimalistic UI:

[screenshot: the shader generator UI]

Here I just set the chunks, the states and optional toggleable defines. I hit Generate and I get this:

[screenshot: the generated shader]

* now when I think about it, deferred renderers can be fully chunk-driven as well; we just have different templates for materials and the full-screen lighting shader, but they can still accept compatible chunks (just reuse your fog chunk in the full-screen shader instead of the material)
7 Likes

Rarely am I concerned about albedo without being concerned about the other inputs to the lighting equation. The data setup and needs of, say, triplanar texturing can be shared between all components, but I guess if you can easily extend your VSData struct with new data then you can compute and pass along what’s needed in later stages. The biggest issue with this is that templates are order-defining: for instance, if you were doing POM you’d want to sample your height maps before you get to albedo, so adding POM to an existing shader requires producing a new set of templates with the new ordering. In MicroSplat I do something similar to this, but it’s much more function specific, since it’s layering splat maps, snow, global texturing, dynamic streams, etc, and those all have huge ordering dependencies.

Anyway…

For me this issue is less about extending an existing shader with a bit of custom code, and more about abstracting the parts of the code you rarely want to change. And even more important than that, it’s about making it future compatible. During the 5.0 → 5.6 cycle, Unity changed the lighting system’s specular response (to GGX), changed how shadows worked, added Enlighten for realtime GI, added all the different modes for VR, added new platforms, etc. Each of these caused existing vertex/fragment shaders to break, while a surface shader written in 5.0 still renders perfectly in 2019.3 today and automatically inherits all of these features. That’s massive, especially for my use cases, where I’m supporting users on many different versions of Unity and across multiple pipelines.

For access to extra interpolators in the VS->PS (or VS->Hull->DS->etc) stages, I’d just predefine a bunch of them which you can opt into with something like #define _NEEDS_VS_TO_PS0.
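
Something along these lines (a sketch; the struct and field names are invented):

struct Varyings
{
    float4 positionCS : SV_POSITION;
    // ...the usual interpolators...
#ifdef _NEEDS_VS_TO_PS0
    float4 extraV2P0  : TEXCOORD6;   // generic payload: written in the vertex stage, read in the pixel stage
#endif
#ifdef _NEEDS_VS_TO_PS1
    float4 extraV2P1  : TEXCOORD7;
#endif
};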

The fact that we even need to have this conversation (and that Unity doesn’t really seem to engage in it; once they decided Twitter wasn’t the place for it and said to move it to the forums, they never showed up on the forums to talk about it) always astounds me. This is basically like Unity introducing a new visual scripting system and declaring that you have to write all your C# in a unique language per platform (Swift for iOS, Java for Android, C++ for PC) because they didn’t want to maintain an abstraction layer anymore. It’s just silly and could easily have been avoided, since they are already doing a lot of this work in their shader graph, and after 4 years of telling them, I’m beyond frustrated… This type of work is far more important than adding a shader graph, as it’s the work people license Unity to avoid doing.

7 Likes

In this case what I do is:

  • template: always call the heightmap chunk before other map-reading chunks.
  • (before any PS code at all) always read varyings into temporary static variables, which chunks are instructed to use.
  • the heightmap chunk can alter those variables; if nothing alters them, the compiler can simply omit them (rough sketch below).
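
Roughly, in chunk terms (the height chunk and property names here are invented, not the actual code):

static float2 vUV0;                   // shared temporary, filled from varyings before any chunk runs
static float3 pViewDirTS;             // tangent-space view dir, assumed to be filled the same way

sampler2D _HeightMap;
float _ParallaxStrength;

void getHeight(in VSData IN)
{
    // the height chunk nudges the shared UV; later chunks (albedo, normal, etc) sample with the altered value
    float h = tex2D(_HeightMap, vUV0).r;
    vUV0 += (h - 0.5) * _ParallaxStrength * (pViewDirTS.xy / max(pViewDirTS.z, 0.05));
}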

Having worked at an engine company for some time, I assume the reason is that there is already way too much work to do :smile:
But hopefully the priority of this issue will become more apparent.

Having done the same, I actually think it’s a combination of existing momentum and the lack of a vision holder who is intimate with the issue and can make the call. There is always too much to do, but every day they don’t address this they are digging a deeper hole that will be harder to get out of. Having been in numerous situations like this, very rarely has there been a time when correcting architecture issues as soon as possible was a mistake (only when close to shipping a game, really). Compared to writing their own compiler for C#, this is a tiny piece of work, and it’s mostly just a text-based parser for the work they are already doing with the shader graph.

2 Likes

I think it’s more than just “too much work to do”… sometimes it takes someone stepping back and giving something like this the cognitive priority to sort it all out. The whole scriptable rendering pipeline has been difficult waters to navigate, and its incompatibilities make for distinctly hard decisions for users of Unity.

Jason ( @jbooth_1 ) is definitely the person to mentally corral all of this. In more than one instance, Unity has recognized when they need to bring in outside resources to make something happen. This seems like an ideal situation for such a thing. I’d feel a lot better about the future direction of Unity if this particular part of the puzzle was more under control, like proposed here. It remains but one part, but a very important part, to get right.

8 Likes

Unity seems to have an organizational structure that keeps them stuck in bad decisions for a long time, which they then have to patch hastily, making the whole idea crumble in the end. A whole lot of organizational inertia. I see signs of them trying to work through that, but they aren’t there yet.

I think it’s more that they are moving in so many directions at once. Compatibility between the different streams has taken a back seat.

The fact that you cannot switch between pipelines mid-project without throwing away all your shaders is a major flaw that will, IMO, heavily stunt SRP adoption for years and cause a major ruckus when Unity inevitably tries to “fix” it by suddenly removing built-in in a hasty decision (as we’ve seen they do multiple times already).

Even shader graph itself doesn’t properly work across pipelines. They have their minds set on matching UE4’s material graphs, but are failing to replicate its reusability (using the same material all the way from mobile to ray tracing) while at the same time throwing away the flexibility Unity’s shaders and rendering had over UE4’s.

5 Likes

I’d love to see a response from Unity on this as well. The approach outlined at the top of this thread feels like the next best thing to surface shaders and doesn’t seem to require fundamental changes to what’s already been done in the new pipelines. As a small team, we reaped enormous benefits from surface shaders over the years, with a ton of our custom effects surviving with zero issues all the way from Unity 5.2 to Unity 2019.2. If we had started with lower-level shaders, we would likely never have made the jump to each new Unity release, or at the very least would’ve had to compromise on engineering time elsewhere, missing critical milestones and opportunities.

This level of convenience and future proofing also has incredibly important chaining effects. We adopted the first iteration of instanced rendering back in 5.x as soon as it came out, jumped to each new Unity release required for updated DOTS, heavily leveraged DOTS, recently started leveraging the new terrain system, etc., and directly provided feedback on them. We wouldn’t have had a chance to engage with those technologies if migrating to each subsequent Unity release involved reworking many of our shaders. I would even argue that the surface shader framework is one of the most important parts of Unity that enabled us to realize our vision and get the development started: without the ease of iteration it provided, we wouldn’t have been able to prototype as quickly, wouldn’t have stumbled on an effective way of achieving some very specific functionality we needed, and would likely have failed to secure the future of the project.

And of course, this isn’t just about teams making games directly. I’d hate to see the Unity ecosystem become more and more sparse over the years due to lack of attention to this area. Over the course of development, we have relied on a great number of third-party assets, including ones from @jbooth_1 . As far as we’re concerned, ease of asset development is as important as ease of game development, because third-party assets can allow small teams to punch far above their weight, be it with amazing terrain shading (as if you had another full-time tech artist), great inspection and validation systems (as if you had a full-time tools developer), postprocessing, etc.

I hope this gets the attention it deserves. The things Unity has developed over the past couple of years are incredibly impressive, but sometimes it’s hard not to be frustrated and impatient when small changes like the ones proposed above could push that work so much closer to greatness.

18 Likes

@phil_lira or @Tim-C Can you please advise whether Jason’s suggestions above are too much to ask? Maybe throw us a bone in light of all the other pain we’re going through with URP?

I wonder if it isn’t time to start a community pluggable render pipeline project. There seems to be enough need for the project to sustain itself, and we could probably salvage the current Unity RP sources to kickstart it.

1 Like

I’m currently implementing a custom SRP, integrating it with the shader chunk system described. I wasn’t planning to make it “for everyone”; I just want it to be:

  • Mostly forward.
  • Not complicated.
  • Extremely flexible.
  • Optimized to render from multiple viewpoints in a frame.

Basically, being able to bend the pipeline to implement any weird effect (non-linear camera projections, portals, unique lighting responses for different objects, lots of custom shadows, etc) is important to me. The shader system needs to be extremely flexible as well, to combine various global/per-material/per-view effects without rewriting shaders.
Maybe some day it will also be useful for someone else, but I’m not sure.
4 Likes

I have been thinking about this. Make our own SRP, with blackjack, hookers, and surface shaders.

I think a truly flexible SRP needs to be done using something like a render graph, so passes and dependencies are organized dynamically instead of manually. Otherwise the complexity will get out of hand due to feature permutation. Same for shader generation, like the chunk system @guycalledfrank proposed.

There’s the nasty problem of shader graph being hard-coded to URP and HDRP, tho.

1 Like

Graph-based systems are inherently limited by both UI constraints and abstraction constraints. You couldn’t write MicroSplat in any shader graph ever invented; you might be able to write a system that could produce individual permutations of what MicroSplat can create, but it would require a lot of custom nodes.

Regardless, an abstraction around lighting would allow shaders to be written once and work on any SRP that still uses Diffuse/Normals/etc…

3 Likes

No, no, no. Not that kind of graph! This kind:
https://ourmachinery.com/post/high-level-rendering-using-render-graphs/

And this:
https://docs.unrealengine.com/en-US/Programming/Rendering/RenderDependencyGraph/index.html

Basically, a dependency graph built from the current frame that is used to determine which passes are rendered and async tasks executed, and in what order.

2 Likes

There was recently a discussion in the graphics community about the pros and cons of render graphs, starting with this post:
https://twitter.com/longbool/status/1219438349527724032
http://alextardif.com/RenderingAbstractionLayers.html

I did something similar but simpler while working at PlayCanvas: a “layer” system, where each layer was basically a render pass, and they could be partially managed in the UI (in a data-driven way). Similar to render graphs, it sometimes had to analyze the whole structure to know resource dependencies, where temporary RTs go, etc. Even though it was finished, it left me with doubts. Too much logic around the user’s graphs can bring more bugs/limitations. Today I’m leaning towards the SRP approach (a single scripted function where you can do ANYTHING), and I think in the case of a “flexible SRP” it’d be cool not to build a render graph system on top of it, but still allow writing simple render scripts, maybe just adding some helper include files with boilerplate to call.

1 Like

Yes, there’s nothing stopping you from building dynamics into an SRP. I think SRPs are the right approach and the wrong execution, much the same as the shader graph. What should have happened is that they built a shader abstraction layer which takes input (code, properties, etc) from the user and compiles it into the current SRP; this layer should have been part of the SRP itself. Then the shader graph outputs to this layer, as does some parser which parses a surface shader representation. That layer shouldn’t care where those user functions come from; it should only care about abstracting the platform/lighting/pass details away from the user. Instead they built that entirely into the shader graph and closed it off so no one else can extend it.

Ironically, that is exactly how surface shaders came about: Aras built a text-based version for a graph to be built on top of, and then the graph was never completed. A graph could be built on top of a custom SRP…

5 Likes

I just decouple my render passes into separate ScriptableObjects and put them into a list. That way I can change the pass order, add new passes, or replace an entire specific pass (i.e. post-process or shadow passes). For me it works fine.

2 Likes

I agree. The idea of making it easier to fully customize your rendering pipeline is great. Having no way to re-use high-level shader logic across customizations, not great at all. SRP needs a robust shader abstraction/generation system that can take user-provided snippets of high-level shader logic (graph or written) and insert them in the correct places in the SRP, while providing implementations for functions that said snippets can use without having to know SRP-specific details.

This way user-provided shaders could have a higher chance of working across SRPs, to the point a game could be built using different SRPs for different platforms, selected at build time, without having to maintain separate projects.

5 Likes