“What has gotten harder is writing portable/upgradable shaders. The only option for that is Shader Graph”
…are why I even felt the need to ask for clarification that you're not removing shaders, because your own staff don't seem to get it. (That quote is from Unity's "Tech and Rendering Lead", who doesn't seem to realise that calling Shader Graph the "only option", for a problem where that is obviously not true, has a huge negative impact and leads people to - quite logically - conclude: "this person is actively trying to get rid of shaders", even though he said in the same tweet that he intends to actively prevent shaders from being removed. If you say "I don't want to change things, but the only way forward is to remove things", then people will assume you intend to remove them, even though you said you don't want to.)
Thanks for all the info.
Getting URP to full parity with the Legacy RP (or built-in render pipeline) is important.
Please give some love to WebGL. Using URP in WebGL projects is a mystery. You can't be sure what will work and what won't, and in some cases the performance is 30% worse in URP than in the legacy pipeline.
You need to restore the lighting falloff in URP to the default legacy values, for realtime and baked lighting. It doesn't make sense to have the same lighting values for HDRP and URP, or to break so abruptly with built-in. At the very least expose the functionality in the GUI instead of forcing us to hack the shader and place override GameObject scripts in every scene.
I'm guessing that some features like Camera Stacking and Point Light Shadows were removed from URP because they are fundamentally non-performant. If that's the case, it might make sense to require them to be manually enabled somewhere in the project GUI, or at least explain the fact that they are non-performant. It's not too difficult, for example, to use spotlights instead if you integrate them into your design from the beginning.
I can’t overstate how happy the mention of surface shader equivalent programmer workflow makes me. Lack of this has been our #1 gripe with URP/HDRP and knowing this gap might be filled in 2021 makes us seriously consider these pipelines for our upcoming project.
@Elringus
100% no grab pass in any Unity in-house or third-party render pipelines. It's just impossible with modern graphics engine architecture; it only works with ancient OpenGL 2-era architecture, where you draw objects one by one.
Cynical/fearful reader here. You are right, I assume exactly that!
I rely entirely on the HLSL and ShaderLab standard in my development. I do not want to see it sacrificed to please artists' needs. That already happened to UE4: setting up "real shaders" there is a nightmare, and it is the main reason I keep using Unity.
I would like to keep full CODE control over all the shader stages (vertex, hull, domain, geometry, pixel) along with custom structs, buffers, functions and so on…
So I agree, we need more concrete signalling here.
This is a good step in the right direction. All essential graphics features that aren't in preview should be integrated into Unity Core. I'm glad these packages are still open source and developers can use their own custom forks instead of the default one. I also think adding cross compatibility is a really good start for integrating HDRP and URP into one project. I do understand why two render pipelines are great for communicating compatibility and performance expectations, although I think it would be best if URP and HDRP were integrated into one package. Maybe label the demanding features as HDRP Graphics, like in Shader Graph, VFX Graph, or the post-processing settings. These HDRP features could be color coded, and developers themselves could check the platforms they want to use HDRP for (like HDRP for PS5 and URP for PS4 and Mobile). There should be an alternative URP feature for each HDRP feature used in all assets, and a warning should be thrown if the developer doesn't implement an alternative.
Also, I believe there are way too many HLSL shaders in the Unity ecosystem to ignore. Ambitious legacy projects can't just be upgraded to URP in a timely manner because most projects use a lot of HLSL shaders. There are thousands of assets, many created by Unity themselves, that depend on legacy shaders to work. I propose Unity work on an official HLSL-to-URP converter. This would help bring compatibility to older projects and tech demos that released only a few years ago. Or maybe HLSL shaders could work naturally with the new URP.
When all these goals have been met, I'd like to see Unity upgrade many of their own tech demos. The Blacksmith and Adam tech demos, the 2018 Book of the Dead demo, and the recent Heretic demo should all be updated to work seamlessly with Unity 2021 and beyond.
First off: thank you for this post. This is the first step I have seen Unity take to acknowledge what has been a critical misdirection of development within the engine over the last few years.
As an asset store author, this is absolutely the best decision you have made in a long time regarding the SRPs, and this alone will make authoring and supporting users much easier. Thank you.
Just jumping in to say that this should include basic Post-Processing. A blit is a blit, a rendertexture is a rendertexture and that’s all that base post-processing should be. When you broke Post-Pro between pipelines, that was the hardest hit for many asset store Post-Pro authors. When I discovered this, I almost gave up on my asset, which has helped hundreds of Unity users push the boundaries of the engine in terms of post-processing performance and visuals.
There are a lot of comments on here about a lack of clarity in our planning around shaders. I just want to call out this one point from the original post:
We need a programmatic way of expressing shaders (in text format) that can work across pipelines, have include files, and use existing libraries - surface shaders are a good abstraction, maybe there are better ones; we want to be sure we take the correct approach. When we have specifics we'll share them so that we can get early feedback from asset store publishers and other users.
Would a player setting here work for you? If you upgrade a project you get the old behaviour - if you create a new project you get the new behaviour. You can opt in to the new behaviour at any stage, but it might change how your content looks?
These features are coming to URP (camera stacking already exists in 2020.1). It is possible to build them to be performant and robust - it just takes a lot more effort than copying the built-in implementation.
For example, in the new camera stacking we did extensive design around user workflows to make sure it was clear what is and isn't supported. In built-in there are things that just don't work (try stacking a deferred camera after a forward camera), but Unity lets you do this and there are no warnings or anything similar. In URP we are really trying to make sure that it's not possible to configure things in a bad way, so there are fewer surprises when you are developing content.
My main issue has been writing shaders for the new render pipelines:
Writing shaders by hand for URP/HDRP is completely undocumented
Shader Graph is far from having feature parity with writing shaders by hand
The code generated by Shader Graph is really difficult to read/use/parse as a reference for writing your own shaders
These three things combined make for a massive barrier to entry for writing more technical shaders, and that's coming from me, someone who's pretty hecking used to writing shaders in Unity at this point - for someone less used to writing shaders, whatever Shader Graph doesn't natively support is effectively not a feature that exists in Unity
The way everything worked in the built-in pipeline was of course not perfect, but it was at least always something you could find documentation on, or look at others' shaders or Unity's built-in shaders as a reference, and even though some solutions were a bit wonky, there was a solution to find and a way to do the thing. (Of course, some of this is a factor of time and it having existed for years.)
Anyway, thanks a ton for the writeup! Looks like things are moving in the right direction <3
First of all, the incentive for open communication is much appreciated! It’s a lot to take in!
I feel that for the past ~2 years users have been on the receiving end of a graphics team's passion project: ever evolving/refactored and largely undocumented. This in itself is not an issue, since I understand there is a need to battle-test software in the hands of users. But the "production ready" label communicated something different entirely, though I see this as more of a structural flaw between engineering and marketing. Granted, some mistakes were made, but we're all understandably human. In the end, these issues wouldn't have come to light if Unity had sat on the SRP for a few years before releasing it to the public.
The rise of the SRPs did cause quite a bit of friction for me, since all my assets up to that point (largely graphical) would be rendered obsolete as URP became the new standard. Seeing how many implementations differ between the built-in RP and URP, this meant reworking them from the ground up, which proved impossible without breaking compatibility with the built-in RP. So it meant either building a separate version of an asset, or building an abstraction layer for C# and shader rendering. I eventually ended up doing both, but it took up the majority of my time, yet resulted in nearly identical assets.
I’ve grown more hesitant to release new assets, due to all the additional work involved.
It's worth noting that the asset store backend poorly supports version-specific assets, and having to backport new features or fixes to X number of projects, test, and upload them erodes the motivation to work on something. Generally, publishers prefer to stick to one package for the minimum supported version, and implement version-specific code through define symbols. Fortunately, since 2019.1 it's possible to declare define symbols per package version; whoever thought of this is a hero!
Also, for this, Version.hlsl is a godsend, as it allows implementing code for a specific SRP version. The need for this can largely be mitigated by shader code abstraction, which becomes more important with this kind of setup. Fortunately, I see that has not gone unnoticed!
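As a minimal sketch of the idea, assuming the Version.hlsl that ships with the URP package and its VERSION_GREATER_EQUAL macro (the exact macro names and include path may differ between package versions, and the MYASSET_* define is just a placeholder):

// Gate SRP-version-specific code on the package's Version.hlsl macros.
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Version.hlsl"

#if VERSION_GREATER_EQUAL(7, 2)
    // Path for URP 7.2.0 and newer (e.g. the changed shadow sampling).
    #define MYASSET_NEW_SHADOW_PATH 1
#else
    // Fallback for older URP versions.
    #define MYASSET_NEW_SHADOW_PATH 0
#endif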
This is still a pain point, since any asset that does anything remotely graphical (e.g. rendering a height map from the scene) requires a wildly different approach for things that amount to the same result. A more concrete example would be Camera.SetReplacementShader. In the built-in RP, this works as you would expect. In URP, however, this specifically requires a ScriptableRendererFeature, which can only be set up through a UI, and thus requires C# reflection to set up automatically without exposing the end user to a set of instructions they first have to look for. My point is that (temporary) rendering in the editor has been inadvertently made convoluted, on top of automatic setup (UX <3) requiring hacks.
So any effort towards harmonizing this will go a long way!
Shaders
The differences between UnityCG and the URP shader libraries are understandably large. I've seen many people complain about the lack of documentation regarding this. Personally, I didn't mind; wading through all the shader code and figuring out what's what did take time, but I also found it a great way to learn how everything was put together. I found the URP library to be a great improvement, as implementing lighting was more straightforward. I don't, however, dismiss the need for documentation; I still often use the Unity manual for this.
There's a definite push towards Shader Graph, but I feel this falls on deaf ears for shader programmers who prefer to write by hand. This probably affects asset store publishers more than anyone, since in some cases a shader needs:
Unity version specific code (not so much nowadays)
Platform specific code
Third party integrations (includes/pragmas)
Complete control over what's done on a per-vertex basis
Tight control over keywords (using keywords in Amplify Shader Editor for example still adds redundant variable declarations or calculations)
This is all in line with the principles of wanting/needing to support the widest range of use cases.
Shader abstraction
The obvious issue is that there is none. Again, this largely applies to asset store creators, looking to support multiple render pipelines with minimal maintenance overhead/file separation.
I'm looking forward to seeing how Surface Shaders 2.0 shape up! Though it's a hard pill to swallow knowing that this will exclude the built-in RP and will require raising the minimum supported version for assets to 2020.x. In light of moving forward, I suppose there is no way around it other than a different approach.
Right now, there is a definite lack of macros; functionality like space transformations has to be written out explicitly per pipeline. For example, the built-in RP's UnityObjectToClipPos amounts to the same thing as the SRP's TransformObjectToHClip. Yet, because the names and HLSL source files are different, these require two separate shader files.
Abstraction through macros would, I think, also take some pressure off breaking changes, since the "front end" would remain unchanged.
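As a minimal sketch of what such an abstraction could look like (the MYASSET_* macro names and the MYASSET_URP define are placeholders, not an existing API; only the pipeline functions referenced are real):

// Cross-pipeline abstraction header (illustrative only).
// MYASSET_URP would be defined by whichever variant or include path the asset uses for URP.
#if defined(MYASSET_URP)
    #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
    #define MYASSET_OBJECT_TO_CLIP(positionOS)  TransformObjectToHClip(positionOS)
    #define MYASSET_OBJECT_TO_WORLD(positionOS) TransformObjectToWorld(positionOS)
#else // built-in RP
    #include "UnityCG.cginc"
    #define MYASSET_OBJECT_TO_CLIP(positionOS)  UnityObjectToClipPos(positionOS)
    #define MYASSET_OBJECT_TO_WORLD(positionOS) mul(unity_ObjectToWorld, float4(positionOS, 1.0)).xyz
#endif

// The shader body is then written once against the macros, e.g.:
// o.positionCS = MYASSET_OBJECT_TO_CLIP(v.positionOS);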
Update guides
For URP 7.2.0, an update guide was posted outlining the changes made. This was a great initiative! As someone who has a hand-written shader on the store, I'm otherwise required to diff-check all changes to figure out why and where my shader broke.
I want to encourage this more! From what I've gathered, the upcoming SSAO implementation brings several changes to how shadows are sampled, which should not go unmentioned. It took well over a month for many shaders to be updated to take the shadow changes from 7.2.0 into account (including Unity's own shaders, such as terrain). Those changes were not included in the update guide, so I wanted to bring this, and its consequences, to attention.
I'm not asking for grab pass to return. What I care about is something that will allow re-implementing what was possible with built-in. It's all about functionality parity, nothing more.
Thanks for being active and providing feedback there. I can definitely see your comments on Blob Shadows, so we do have them. We are using an external tool for the public board and unfortunately it doesn't track returning users on the page. But you have a very valid point, and I agree with you that users should at least be able to see their votes and comments on a card. I'll forward your feedback to the ProductBoard team.
Cool, thanks for checking. As a stopgap solution, perhaps emailing users their comments would be a good option. At least that way I'd have a local record that I'd commented.
As an artist and a creator for the asset store, I had to release my massive photogrammetry pack in 2 versions, Standard and HDRP. It was a pain.
Why? HDRP stores its textures in different ways. The auto-upgrade wouldn't work because the detail texture is completely different in HDRP, and it's broken too (roughness overwrites instead of multiplies). Then I realized, after I had already published two versions, that I could have made some Amplify shaders that grabbed the bitmaps in the old renderer's format, used them in the HDRP pipeline, and grabbed the correct detail map. DOH!
So why not, instead of making an auto-upgrader alone, make a "replace this with that" list that the upgrader can use to follow the asset creator's instructions? Or, for simple HDRP shaders, provide a method in the material menu that allows specifying which channels it gets which data from.
It was a nightmare for me to get people to 'register' my package's translucency index with the HDRP package. Why is that my business? You should do it; you have access to the meta files, so just do it yourselves! Computers are supposed to do things like that. Half the screenshots I see of my assets have neon green plants because people didn't follow the instructions I provided and probably just thought my art sucked. And speaking of registering stuff, I never did figure out how to get terrain to use a shader that had the depth test turned on. I'd drag it in there and it'd reject it, so I rage quit. (The default shader is outside the asset directory and can't be changed.)
Having 'default HDRP settings' in the project settings is confusing. People won't figure THAT out without an angry day of wondering why effects are happening that they didn't put in their sky-fog volume.
I never have luck with your 100,000 lux lights. I try, it looks bad, I give up. I try again another time, it sucks, I give up. I'm not the only one who has said this. In fact, nobody I know has gone with the massive-intensity-plus-exposure-crank-down method Unity seems to be pushing, and I don't even have a clue what the benefit is.
How about providing 10 foliage and grass shaders, 10 water shaders, 10 hair/fur shaders, 10 pre-adjusted glass shaders, and stuff like that, so creators don't have to learn Shader Graph/Amplify just to release some assets? I released a couple of bushes and one tree and lost two weeks trying to get shaders that would wiggle the leaves, and I never did figure out how to make it work with a wind zone. You won't have good assets on the store if I have to re-invent the wheel for the simple stuff. Lots of artists have enough of a burden learning and using the insanely complex details of modern game art, and damn it, I just don't have the time to learn to be a good coder along with that - not for the money I'm seeing.
Oh, and your anisotropic shader is broken, because if you mess with the 'tangent map' (undocumented) it changes the normals that the shadows use, which is like rotating the normal map in UV space. It's only supposed to affect the gloss!
And would someone please program a detail-mesh shader at long last? I see the Heretic team write insane shaders, yet I can't place little rocks in my map. It took me 4 hours to figure out why they were white - no shader!
I guess in the end my view of HDRP is that even after a year of developing for it, I have no idea how to use it and no idea why you arrange things the way you do. It's a spiderweb of things pointing to things that will scare off newbies. I have no idea what best practices are. You need to watch a decent artist try to get an HDRP scene working and create some stuff for it, without you interfering, to see what I'm talking about. 9 out of 10 will rage quit and knock over your donut table on the way out. HDRP Unity is no longer the 'easy to get started' engine.
First I'd like to thank you all for writing this. I feel like I've been screaming into the Grand Canyon for two years with the graphics team on the other side going "What? Did you say everything is great? Yeah, we think it's great too!". I've been maintaining compatibility with some version of HDRP and LWRP/URP for almost two years now, and it has basically prevented me from developing new features and stopped me from making any new products for the UAS because of the support requirements. I've also removed a product because you closed off the Shader Graph API after I released a node for it, and essentially stopped working on two assets because maintaining compatibility with more than one SRP in an asset is impossible.
This doesn't personally affect me much, but I'm curious as to why this is the case. As long as the update loop gets called and the shader is HDRP compatible, what's the issue? Conceptual purity? Being able to have particles which just work across all 3 render pipelines seems important for the next few years. So either have the old system work in the new, or backport VFX Graph to the current pipeline. Doing neither means every particle system breaks when you upgrade to HDRP, which is what you're trying to get away from.
So while I totally understand this, there’s some difference between conceptual purity and practical usefulness. As an example, users of mine often fight with Unity’s lighting model with terrains, particularly with a sheen caused when viewing terrain at a glancing angle. There are two things which contribute to this; one is the minimum metallic value (0.04 or something) in the metallic workflow, and the other is from fresnel. To solve the metallic issue, I have an option to run the shader in Specular workflow mode, where internally everything is still computed as metallic workflow, then I do my own metallic to specular conversion and remove the 0.04 minimum value. However this still does not solve the fresnel issue.
In my conceptually pure world, my terrain shader would not support UV scale (texel consistency), normal strength (blown-out normals make the above worse), etc. - but in practice artists want these controls. So anyway, I'm not saying exposing full-blown per-material controls for the light loop makes sense, but if there are details which allow things like the above to be practically controlled, that would be useful.
My first asset store product was a vertex/fragment shader generation system, and the experience of upgrading it through the 5.x cycle was so painful that I stopped developing that product, made MicroSplat, and swore to only use surface shaders from then on. So SRPs were a massive slap in the face, returning me to vertex/fragment upgrade hell, but on two pipelines instead of just one. For me, this is by far my largest issue.
I think most of this can be achieved using a ScriptableAssetImporter for each SRP, and a common library of functions to help parse files and stuff the resulting code/properties/cbuffer data into the correct places. Basically, wrap blocks of code in BEGIN_STUFF/END_STUFF markers, and use some kind of parser.GetBlock("STUFF") to grab a block and put it into the resulting shader file. (Ideally this allows multiple blocks of the same kind, so you can grab CBuffer properties from multiple files, etc.)
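To make the idea concrete, a block-structured source file might look something like this (purely illustrative - the block names and the Surface/ShaderData types are hypothetical, not an existing format):

BEGIN_PROPERTIES
    _Tint ("Tint", Color) = (1, 1, 1, 1)
END_PROPERTIES

BEGIN_CBUFFER
    half4 _Tint;
END_CBUFFER

BEGIN_CODE
    void SurfaceFunction(inout Surface o, ShaderData d)
    {
        o.Albedo = _Tint.rgb;
    }
END_CODE

The importer would pull each block out and splice it into the correct place in the generated per-pipeline shader.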
I do believe that the shader graph and lit shader should all go through this same abstraction layer. This is the only way to ensure consistency while enforcing maintenance and feature parity. Surface Shaders and the Standard shader do not do this, and because of that there are lighting inconsistencies between them. I know this will be a massive refactor, but it will be one that pays huge dividends in the end. I cannot stress this enough- if these operate as independent systems, there will be divergence and regressions, and none of the systems will reach their full potential.
Surface Shaders had a lot of funky stuff and strange bugs, but over time I found ways to implement almost everything in them, and the feature set of MicroSplat is far greater than my previous product at this point. Here are some things to consider:
Terrain/Object blending
To blend objects with the terrain, I need to adjust the lighting such that we interpolate from the terrain normal to the object normal over some blend area. This effectively means modifying the resulting world space normal from the shader. The way I do this is certainly funky- but it works. Basically, I run the shader in the custom lighting overrides and blend the TBN matrix there and convert back to a final tangent space normal which the lighting system takes as input.
This is obviously not ideal, but it leads me to believe that all parameters should be able to be passed as inout, so that if users want to modify them, they can declare their parameter as inout and do so. You wouldn't think the user would want to change the TBN matrix, for instance, and would likely pass that as read-only by default. The current structure of the shader code makes this very hard in SRPs, since much of the raw data is computed via include files and passed as read-only through many different functions, or copied into structs and passed along. As such, I currently don't blend the lighting in SRPs, because unrolling all of that code to add the modifiers would make updating to new versions harder.
Tessellation
In practice, tessellation causes a ton of issues - but people love it. Tessellation has been a problem with surface shaders, in that it forces you to not have access to your fragment Input struct in the vertex stages, so any system which needs to compute data in the vertex stage and pass it to the fragment stage won't work with tessellation. (I think the reasons are obvious - not having to generate code for that data to pass between domain/hull/tessellation stages, etc.) Also, when Draw Instancing was written, no one updated the tessellation stages to pass the instance ID, so now I have to disable tessellation if Draw Instancing is enabled. So when considering things like how the user will access custom vertex data, or compute data to pass to the fragment stage, please consider that this should be viable with tessellation on as well.
AppData, FragInputs, ViewDir, etc
Surface shaders allow you to customize these structures in a lot of ways, and I'm betting that makes the parser a lot harder. They also have magic keywords in these structures, like viewDir, which changes what space it's in depending on whether you write to o.Normal or not. That's all very funky and a cause of a lot of confusion, and this is a place where I feel the code output from the shader graph does better.
For instance, the shader graph fills out a struct with all kinds of common things, like TangentSpaceViewDir, WorldSpaceViewDir, etc. I suggest that the ScriptableAssetImporter that loads a .surfaceShader file and converts it into the actual shader scan for these names in a user's code and include them if they exist. If the user actually names a variable that in their own structures, that code would get stripped by the compiler if it's not used anyway. This means the user can just use i.WorldSpaceViewDir when they need it, and it's just there, already computed for them when they need it, and not if they don't.
The same system can be used for AppData and other structures. I do not think the user needs to explicitly name these variables, a fixed set of names will be fine. For arbitrary data passed between the stages, some kind of Custom0, Custom1 convention seems fine to me. This will hopefully keep the parsing code simple, while making more shader code consistent across everyone’s shaders. Something like this also makes it easier to support tessellation, since all of those structures have fixed naming conventions, etc.
Modern HLSL
I’m a bit old school in my HLSL usage, but I believe it now supports interfaces and more modern constructs. These could be wonderful to enforce specific contracts between SRPs, and in custom shaders. For instance, having a common interface for SRPs to use between them, ensuring that both pipelines have a common set of functions for things like space conversion, getting the camera position, etc. Then if a user wants to write their own SRP, they know exactly what functions they need to account for to make shaders compatible.
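As a small sketch of what that could look like (the interface and method names here are made up for illustration, not an existing SRP contract):

// Shader Model 5 interfaces/classes can formalise a lighting contract.
interface ILightingModel
{
    float3 Evaluate(float3 normalWS, float3 viewDirWS, float3 albedo);
};

class LambertLighting : ILightingModel
{
    float3 Evaluate(float3 normalWS, float3 viewDirWS, float3 albedo)
    {
        float3 lightDirWS = normalize(float3(0.5, 1.0, 0.25)); // stand-in light direction
        return albedo * saturate(dot(normalWS, lightDirWS));
    }
};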
Customization
Surface Shaders contained a lot of funky ways to override things, via pragmas and magic functions. A more formal contract here would be nice. Virtual/override would be amazing, but I don’t think that’s available in HLSL. Instead, something like:
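(A hedged sketch of what such a block might look like - the BEGIN/END markers, the function name, and the Surface/ShaderData types are illustrative only:)

BEGIN_URP_LIGHTING
    half4 CustomLighting(Surface o, ShaderData d)
    {
        // Custom lighting model; used only when this block is present in the source file.
        half ndl = saturate(dot(o.Normal, d.MainLightDirection));
        return half4(o.Albedo * ndl, o.Alpha);
    }
END_URP_LIGHTING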
The parser can do a GetBlock(“URP_LIGHTING”), and if it exists, we use our custom lighting. If not, we include the default model.
Shader Graph vs. Text Shaders
You will never reach parity with text shaders in the shader graph. For instance, if the user wants to get data from a compute buffer, you could add a bunch of nodes to do that. Or a new master node for terrain shaders. But with each of these, you add a mountain of new nodes and code to support the feature. In contrast, adding any of these to a text-based system requires no new code on your end. Shader Graph should not try to reach parity with what text-based shaders can do - it's an abstraction to make writing shaders easier for a specific domain and audience. Rather than spending thousands of man-hours chasing parity, focus on opening the API for users to expand, and on increasing the ability for shader graphs to easily work with text-based code chunks. A large part of Unity's value is in your ability to expand it, and closing off the node API puts all reliance on Unity to provide every feature a person could want, while stifling innovation. A large part of graph workflows is having coders and artists work together on custom nodes, turning complex shader code into simple nodes. Neither of these is possible right now. (And don't say you can make the code in a bunch of nodes and turn it into a node - that doesn't provide options, a clean UI, or access to things like dynamic branching, compute buffers, etc.)
Grab Pass, RenderWithShader, Custom Passes
My Trax asset (for MicroSplat) relies on RenderWithShader. I have been unable to get this working in URP/HDRP using CustomPasses and the other potential replacements for it. Conceptually it makes sense - insert some custom code into the pipeline at a given moment - but in practice I cannot get it to work correctly. Unifying this kind of stuff, and making sure you don't have to use reflection to add data to the user's configuration, etc., will go a long way. But ideally, this is dogfooded with practical examples a lot more. CustomPass and its URP equivalent are not really designed to render from different cameras; you can hack the camera matrix to do this, but that's not very user-friendly for the average user. And I can't even get the right results out of it, and I'm at least decent at this kind of stuff.
Grab Pass is an interesting issue. Having a user just arbitrarily stall the pipeline when some object gets drawn is something I can totally see you wanting to avoid. But the basic tenet of "Hey, I want to put a sync point here where we read this stuff back" is a valid use case, and whatever hooks you add, such as "after opaque objects", are not going to fully cover all use cases. So my one thought here would be to have some mechanism to insert some kind of custom sync point into the rendering in a way that is predictable - perhaps by allowing the user to bucket objects into a before/after stage. On the other hand, as ugly and slow as it is, it hasn't stopped people from shipping products with it in all the time it's existed. Not every product needs to run as fast as it can.
Misc
As an example, the camera has a background color property. You edit it, it works. But in HDRP, if you set it via script, nothing happens. This is because the editor script makes it seem like that field is still being used, but actually when you edit it in the editor, it edits a color field on the HDRPCamera component, which shows no fields in its own editor. This is extremely confusing. If the HDRPCamera component is going to own this data, then it should be the one showing it in the editor. Stuff like this makes it extremely confusing to work in the SRPs, and when each SRP does different stuff like this, it's maddening trying to work across them.
Honestly, I think having your teams split between Shader Graph, SRP, URP, and HDRP has done you massive harm. These teams not only need to be working together, they need to be using each other's stuff and agreeing on how these things are implemented, as one team. How many people at Unity have implemented something that has to work across all 3 current pipelines? Kept it working through changes? Your own assets in the store have 2-star ratings because you can't keep up with the changes we are expected to keep up with, and we don't even have documentation. Your teams should be taking demos like the URP boat demo, porting them to both SRPs, and keeping them as live working examples through changes. Your team needs to feel the same pain we do.
This is great news, but please adopt Shader Graph as the main official way of using and creating shaders, just like Unreal has done since Unreal 2.
This should especially include one for the Terrain system as well. Just give beginners default shaders made with the graph that they can learn from and change; the custom Unity shader approach is just one more layer of complexity, confusion, and incompatibility that needs to go. The time spent making a custom shader nobody can inspect or later use, because it doesn't fit any specific requirements, could also be spent on making a new Shader Graph node, and then you don't have people asking "when is this coming for the graph as well?". We have been looking for a tech artist for almost a year and are willing to pay good money, but it's just not realistic to find one. Rendering engineers can still write their own, but they're like unicorns too. I'm not saying kill writing custom shaders - no, improve the foundation - I'm saying totally kill Unity-made custom shaders entirely for everything but very specific cases like text rendering.
Let's be real: nobody is using the standard/terrain shaders in a professional capacity aside from background decoration pieces, because if you have the simplest custom requirement it's already not viable - and beginners definitely don't want to use channel packing just to test something. Stay on one shader pipeline, do everything with the graph, and cut the extra work and the absolutely unnecessary complexity. I could make terrain materials 15 years ago in Unreal and it's still not viable in 2020 in Unity. I know your team is hard at work on a great new terrain shader, which again will not be compatible or extendable, won't fit any specific requirements, and will be outdated the day it releases, no matter how great it is; that's just reality. You can't make a shader for us, just like you can't make a game for us. You can, however, make great, helpful examples in the graph which double as standard shaders, and you can make great feature compilations in nodes.
HDRP lighting is also an absolute pain and an insane mess of confusion. Exposure works inversely. Different values in the default scene than in the sample. No link between physical camera exposure/light and the "other" values. Auto exposure on by default. It took our environment artists 2 weeks, and after looking at the example scenes and the new tutorial multiple times we are still in utter confusion about whether this is acceptable or correct. Something needs to happen with that exposure approach UX.