Hi, thank you for the opportunity to ask about SRP!
While sub-optimal from a performance point of view, GrabPass is immensely useful in some areas, such as visualizations, or where performance doesn't matter that much (title screens, cutscenes, etc.).
The current GrabPass alternative in URP, injecting a custom pass via a static config asset, doesn't cover cases where a shader pass needs dynamic access to the screen contents (backbuffer).
Let’s say we need something like a Photoshop layer blend mode, where the layer is a game object in Unity. If it’s a single object, the implementation is straightforward: grab the screen contents into a texture with an after-transparent custom pass and draw the object using that texture as the blend source. Now imagine we have multiple such objects stacked on top of each other; each object should blend not only with the after-transparent grab texture, but with the previously drawn objects as well. The objects are added and removed dynamically at runtime, and we have to grab the screen contents after each such object is rendered. Something like this is trivial to implement with GrabPass in built-in, but impossible in URP, as we can’t insert/remove custom passes at runtime or sample the screen contents at arbitrary points.
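For comparison, here is roughly what the built-in-pipeline version of that single-object case looks like; a minimal sketch, with the blend function reduced to a simple multiply as a stand-in for any Photoshop-style blend:

```shaderlab
Shader "Custom/GrabBlendMultiply"
{
    SubShader
    {
        Tags { "Queue" = "Transparent" }

        // Built-in only: an unnamed GrabPass copies the current screen
        // contents into _GrabTexture once per object, right before that
        // object is drawn. Stacked objects therefore blend with the
        // previously drawn ones automatically.
        GrabPass { }

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _GrabTexture;
            sampler2D _MainTex;

            struct v2f
            {
                float4 pos     : SV_POSITION;
                float4 grabPos : TEXCOORD0;
                float2 uv      : TEXCOORD1;
            };

            v2f vert(appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.grabPos = ComputeGrabScreenPos(o.pos);
                o.uv = v.texcoord;
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                fixed4 screen = tex2Dproj(_GrabTexture, i.grabPos);
                fixed4 layer = tex2D(_MainTex, i.uv);
                // "Multiply" blend mode as a stand-in for any blend function.
                return screen * layer;
            }
            ENDCG
        }
    }
}
```

Note that a *named* GrabPass (e.g. `GrabPass { "_SharedGrab" }`) would only grab once per frame for all objects sharing that name, which is cheaper but defeats the per-object stacking described above.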
I have a plugin on the Asset Store (Blend Modes | Particles/Effects | Unity Asset Store) which provides that kind of game object blending via the built-in renderer. It can’t be completely ported to URP due to the aforementioned limitation. I’ve also received a lot of feedback from users mentioning they are not switching to URP only because it lacks proper multi-pass and GrabPass alternatives.
Are there any plans to allow authoring multi-pass shaders (similar to built-in) and provide a way to access screen content (backbuffer) at any point inside the passes in URP?
I’ll try to shed some light on the current status of Grab-pass and have a colleague answer your multi-pass questions.
The grab-pass is something that comes up regularly. It’s a valuable feature due to its ease of use. That being said, it brings a lot of scary implications that we are not fond of. In the scenario you described, a texture copy has to happen after each object that is rendered, which is expensive to say the least.
We don’t like that a single line in a shader can have such massive performance implications.
Another issue is that there is a bit of a gap between how built-in handles this compared to the SRPs. In built-in, a lot of the frame is driven by the shaders. In the SRPs, we have our ScriptableRenderPasses and ScriptableRendererFeatures that build up the frame. Having this grab-pass in the shaders would not fit that workflow.
That said, we agree this feature is easy and quick to set up. Our vision here is to create some utility functions or other features that allow for an equally easy, but slightly more explicit, setup. That way we offer the usability while also making clear to users what is actually happening.
We really want to avoid users spending that much performance without them knowing why and how it’s happening.
So, in short: we hear you, and we want to offer an alternative to grab-pass, but in a way that fits the SRP workflow.
If it helps, we implemented some multi-layer refraction in URP by using different Renderer.renderingLayerMask values on objects and copying the color buffer in between each layer. It’s not as flexible as GrabPass. Most importantly, it does not handle dynamic sorting, say when your camera rotates around multiple objects on different layers. You’d have to have a script that switches objects’ rendering layers depending on camera distance…
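For anyone wanting to try this, a rough sketch of one such per-layer pass, written against the pre-RenderGraph URP API (exact class and method names vary between URP versions; `_LayerGrabTexture` is a made-up property name, and one instance of this pass would be enqueued per layer, so the number of screen copies equals the number of layers):

```csharp
// Sketch only: copies the camera color, then draws one rendering layer,
// whose shaders sample the copy as their blend source.
class LayeredGrabPass : ScriptableRenderPass
{
    readonly uint m_RenderingLayer;   // which Renderer.renderingLayerMask bit to draw
    readonly int m_GrabTextureId = Shader.PropertyToID("_LayerGrabTexture");

    public LayeredGrabPass(uint renderingLayer)
    {
        m_RenderingLayer = renderingLayer;
        renderPassEvent = RenderPassEvent.AfterRenderingTransparents;
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        var cmd = CommandBufferPool.Get("LayeredGrab");

        // 1. Copy the current camera color into the grab texture.
        cmd.GetTemporaryRT(m_GrabTextureId, renderingData.cameraData.cameraTargetDescriptor);
        cmd.Blit(renderingData.cameraData.renderer.cameraColorTarget, m_GrabTextureId);
        cmd.SetGlobalTexture("_LayerGrabTexture", m_GrabTextureId);
        context.ExecuteCommandBuffer(cmd);
        cmd.Clear();

        // 2. Draw only the renderers on this rendering layer.
        var drawing = CreateDrawingSettings(new ShaderTagId("UniversalForward"),
                                            ref renderingData, SortingCriteria.CommonTransparent);
        var filtering = new FilteringSettings(RenderQueueRange.transparent,
                                              renderingLayerMask: m_RenderingLayer);
        context.DrawRenderers(renderingData.cullingResults, ref drawing, ref filtering);

        cmd.ReleaseTemporaryRT(m_GrabTextureId);
        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
    }
}
```

As noted above, this keeps culling and batching within each layer, but the layer assignment itself is static unless a script reshuffles rendering layers per frame.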
Every decision in rendering/shaders comes with performance tradeoffs and the ones with GP are a little bigger/more extreme than with other features. There is nothing bad about that.
Those of us writing shaders have been saying continuously for the last 3 years that the grab-pass performance is not an issue, has never been an issue.
Is there anyone, anywhere, who actually has a real problem with the performance? It seems to come up again and again as the ‘reason’ why GP still hasn’t been ported, but I haven’t met anyone outside Unity who wants/needs that.
There are a few reasons we didn’t want to support GrabPass in the way it was implemented in built-in. The concept of something in a list of objects that can inject arbitrary rendering logic at execution time makes doing any kind of frame optimisation in advance very difficult.
We decided the better path would be to make sure we had good alternatives for the same use-cases. Obviously we have not done that justice yet and it still needs a lot of work, but we are certain we do not want to implement GrabPass as it was in built-in: frame interruption and pass injection of an undefined nature is not great, and it can kill mobile performance.
The OpaqueCameraTexture in URP and distortion/refraction in HDRP were some of the features meant to solve these use-cases. In URP we would like to make the OpaqueCameraTexture more useful by not predefining where it is injected and not limiting it to a single point. Adding HDRP-like distortion support and out-of-the-box refraction is also a path, and things like frame buffer fetch being easier to use could cover some cases.
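For reference, sampling the opaque texture from a URP shader today looks roughly like this; a fragment-function sketch (it assumes a `Varyings` struct from the surrounding shader, requires “Opaque Texture” enabled on the URP asset, and the distortion offset is an arbitrary example):

```hlsl
// URP: sample what has been rendered so far (opaque geometry only) --
// the closest built-in equivalent to a grab texture today. The texture
// is captured once after opaques, which is exactly the fixed injection
// point discussed above.
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/DeclareOpaqueTexture.hlsl"

half4 frag(Varyings input) : SV_Target
{
    // Screen-space UV of the current fragment.
    float2 screenUV = input.positionCS.xy / _ScaledScreenParams.xy;

    // Wobble the UV a little to fake refraction (arbitrary example).
    screenUV += 0.02 * sin(screenUV.y * 40.0);

    half3 sceneColor = SampleSceneColor(screenUV);
    return half4(sceneColor, 1.0);
}
```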
This solves some of the use-cases but not all, and one of the biggest blockers we face here is the lack of a good list of the most common usages of GrabPass, which we need in order to define alternatives and decide where to spend time on proper alternative workflows.
We know of very few:
Distortion (heat waves, etc.)
Refraction
Color Blending within a render queue
Definitely list any you have in this thread. It is always much more helpful to know what the use-case is rather than which feature is needed, since there are always multiple ways to do the same thing, and we want to make sure we provide ways that won’t hurt future plans for better, more streamlined performance.
As for the other question here, multi-pass: SRP itself already supports this 100%, it’s just that our SRPs do not, which is again due to our own design ideas on where we want each of them to be. This is definitely not great when there are so many useful cases where it’s needed. In URP we are looking at making the whole frame (all the passes) a lot more customisable and modular once we have done the shift to RenderGraph.
This shift to a more modular and swappable system will also allow better customisation. When it comes to multi-pass shaders, all that is needed is to pass the list of shader passes you want rendered. For example, the URP shader passes are currently hardcoded to the ones listed on this page; in the future we would like to make this less hardcoded so that any number of passes can be rendered, leading to custom multi-pass shaders being easily supported.
I want to stress that the queue here has to be dynamic; e.g., something like URP’s render features won’t work, because the passes have to be injected/removed/reordered dynamically at runtime based on game objects’ locations relative to the camera and/or renderer components’ sorting order. I think the example with Photoshop layer blending, where layers are game objects, is the best way to describe this use case. Is this something you think can be made possible with URP in the future?
Unfortunately you just explained exactly how GrabPass works in the backend, which is what we want to stay away from.
This is the tricky thing: you could have GrabPass and non-GrabPass objects (shader passes) in the exact same queue, separated only by sorting order. This means that in any rendering path they need to be checked for and separated out, a new pass injected and rendered, then the existing culling results split and the existing pass re-queued; the frame then continues on until the next encounter, which is unknown without specifically tagging it beforehand and splitting at that point.
All of this can be very taxing and unpredictable, and it stops any advancements in batching and render pass optimisations on tiled hardware. At least in the way it was typically used, it was one of the main sources of performance issues in the majority of project reviews Unity has conducted.
We can definitely have a path in there that works like this, if you were to write a replacement Opaque or Transparent pass for example, but it would opt out of almost all optimisations for that entire render queue; this is not something we would have out of the box.
That said, being able to say “render this specific shader after this queue, but before that queue, with access to the color buffer” is something we need. The queue points should be customisable, but they can never be dynamic unless it can be figured out before rendering anything, and that leads down a very specific path which might not solve all use-cases.
Dynamically injecting passes is already possible, but you cannot inject a pass within an existing pass. So, unlike GrabPass, which could split any pass up anywhere any number of times (which is proven to have a substantial impact on performance), the idea would be to pre-define how many splits you want and which shader passes to render at those splits; these would be rendered in a pass that also takes a copy of the current screen beforehand and feeds it into the shaders. This of course means more work on the project’s side, as it’s impossible to create a single solution that fits all use-cases without going back to a dated technique like GrabPass.
URP should definitely have better options out of the box, like pre/post-transparent points, which could map well to refraction/distortion, and the ability to add these “split points” to existing passes without having to re-write/override URP passes. But without a decent list of use-cases, there is no point designing something that doesn’t solve any real-world problems beyond replicating what GrabPass did.
This is one of the main reasons we need to understand how GrabPass was utilised: to make sure that, for the majority, we create something useful that fits well with a modern, performant rendering concept. It may mean multiple new workflows to replace what was a very generic system.
I see, so basically what was possible with built-in will no longer be possible in URP/HDRP and ultimately with Unity when built-in is deprecated (building a custom RP for an asset store plugin is not an option). That’s sad to hear, but I’ve seen this coming and was ready to deprecate the plugin anyway. Thank you for clarifying this.
It’s not technically impossible. You can make a ScriptableRendererFeature that basically does this:
Have two render textures that you can blit between (I’ll call them A and B), and then loop over the objects you want to draw: sample buffer A and output to B, then for the next object sample buffer B and output to A, and repeat until all your objects are drawn. That’s a costly way to do it, but if performance is of no concern (as with grab-pass) then this might help you out. The draw command allows you to set an overrideMaterial or overrideShader to render them with your special blend shader.
I’m not recommending to do it this way, just want to help and find an alternative solution for your specific case!
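A rough sketch of that ping-pong loop, written against the pre-RenderGraph URP API (signatures differ between URP versions; `_BlendSource`, `m_ObjectsToBlend`, and `m_BlendMaterial` are made-up names supplied elsewhere):

```csharp
// Inside a custom ScriptableRenderPass. Sketch only: each object is
// drawn individually with cmd.DrawRenderer, so there is no culling or
// batching -- the cost the suggestion above acknowledges.
public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
{
    var cmd = CommandBufferPool.Get("PingPongBlend");
    var desc = renderingData.cameraData.cameraTargetDescriptor;
    int a = Shader.PropertyToID("_BufferA");
    int b = Shader.PropertyToID("_BufferB");
    cmd.GetTemporaryRT(a, desc);
    cmd.GetTemporaryRT(b, desc);

    // Start from what the camera has rendered so far.
    cmd.Blit(renderingData.cameraData.renderer.cameraColorTarget, a);

    bool aIsSource = true;
    foreach (var r in m_ObjectsToBlend)   // renderers collected elsewhere
    {
        int src = aIsSource ? a : b;
        int dst = aIsSource ? b : a;
        cmd.Blit(src, dst);                         // carry the previous result over
        cmd.SetRenderTarget(dst);
        cmd.SetGlobalTexture("_BlendSource", src);  // the blend shader reads this
        cmd.DrawRenderer(r, m_BlendMaterial);
        aIsSource = !aIsSource;
    }

    // Write the final result back to the camera target.
    cmd.Blit(aIsSource ? a : b, renderingData.cameraData.renderer.cameraColorTarget);
    cmd.ReleaseTemporaryRT(a);
    cmd.ReleaseTemporaryRT(b);
    context.ExecuteCommandBuffer(cmd);
    CommandBufferPool.Release(cmd);
}
```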
I ran into the problem that “OpaqueCameraTexture” does not work as expected with a camera stack (multiple cameras), as it only returns the RenderTexture of the base camera.
This leads to the background being invisible in reflections in my case.
Is it somehow possible to get the RenderTexture after multiple cameras rendered into it?
2 years late, but I’ve finally looked into the suggestion, and no, that wouldn’t work for my use case, as I’d have to iterate over individual objects to draw, basically resorting to manual rendering, which means no culling, no stencil buffer, etc.
And in all those 2 years nothing seems to have changed; there's still no hope of restoring the killed feature, while URP is now the default renderer in Unity 6.
Hopefully, Godot won’t make silly decisions like this.