I have a game project with a bit of an odd setup. I have a physical level filled with objects that use a set of shaders, which I use to visualize objects in the Editor, but also while debugging in play mode. I’ll call this set Editor Shaders.
When in play mode, I use graphics render requests in scripts to render objects; these objects are rendered with a second set of shaders, which I’ll call Play Shaders.
When I’m in play mode, I don’t want to see the objects that use Editor shaders. I can think of several ways to do this, but I’m not sure all of them are performant.
I can use a script to iterate through all of the objects and disable their Renderer components. This would mean iterating through virtually all game objects, but once they are disabled I assume there is no ongoing overhead?
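Roughly something like this minimal sketch is what I had in mind (the "Editor/" shader name prefix is just a placeholder for however the Editor shaders end up being identified):

```csharp
using UnityEngine;

// Sketch: on level load, turn off every renderer whose material uses an Editor shader.
public class DisableEditorRenderers : MonoBehaviour
{
    void Start()
    {
        // This walks every loaded Renderer once; after that the disabled
        // renderers shouldn't add any per-frame rendering cost.
        foreach (var renderer in FindObjectsOfType<Renderer>())
        {
            var material = renderer.sharedMaterial;
            if (material != null && material.shader.name.StartsWith("Editor/"))
            {
                renderer.enabled = false;
            }
        }
    }
}
```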
I imagine there is a way to tell URP to ignore specific shaders, and I would appreciate it if someone could point me towards the documentation on how to do that. Would that require a Custom Render Pipeline instead? And perhaps there is still a per-frame cost involved in culling the objects with that approach?
Yes, if you disable a renderer there is no rendering cost, but those components and materials still take a bit of memory.
I guess you could write a renderer feature, or create a script that sets a global keyword or shader variable so you can use a different shader variant or discard pixels.
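The global variable idea would look something like this sketch (_HideEditorObjects is a made-up property name; your editor shaders would need to read it and discard):

```csharp
using UnityEngine;

// Sketch: flip one global shader float so editor shaders can discard themselves in play mode.
public class EditorObjectToggle : MonoBehaviour
{
    static readonly int HideEditorObjects = Shader.PropertyToID("_HideEditorObjects");

    void OnEnable()
    {
        // Editor shader side would do something like:
        //   if (_HideEditorObjects > 0.5) discard;
        Shader.SetGlobalFloat(HideEditorObjects, Application.isPlaying ? 1f : 0f);
    }
}
```

Note that with this approach the objects are still culled and submitted every frame and only their pixels get thrown away, so fully disabling the renderers is still the cheaper option.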
The real question is: what is your release workflow? Do you plan to keep all the editor shaders, renderers, and so on in the scene in a release build? What is the plan for hiding them?
In general it would be better to have no editor-only stuff in the scene, and instead have a script that renders those objects, or an additional pass that renders what you need. Or just use Gizmos or Handles for that.
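For example, a trivial gizmo like this only ever runs in the editor, so it costs nothing in a build (PortalBoundsGizmo is just an example name):

```csharp
using UnityEngine;

// Example: editor-only visualization with no runtime rendering cost.
// OnDrawGizmos is only called by the editor and never in a player build.
public class PortalBoundsGizmo : MonoBehaviour
{
    void OnDrawGizmos()
    {
        Gizmos.color = Color.cyan;
        Gizmos.DrawWireCube(transform.position, transform.lossyScale);
    }
}
```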
Thanks for the advice, I hadn’t considered release optimization.
I currently pull information out of Materials that use ‘Editor’ shaders, and feed that information into ‘Play’ shaders. I do this on level load at runtime, but there is no reason I can’t bake this information into a file in the future.
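Just to illustrate what I mean by pulling information out, the transfer is roughly like this (the _BaseColor/_BaseMap names here are the standard URP Lit property names, not my actual Editor/Play properties):

```csharp
using UnityEngine;

// Sketch: copy selected properties from an Editor material into a Play material on level load.
public static class MaterialTransfer
{
    public static void CopyEditorToPlay(Material editorMaterial, Material playMaterial)
    {
        if (editorMaterial.HasProperty("_BaseColor"))
            playMaterial.SetColor("_BaseColor", editorMaterial.GetColor("_BaseColor"));

        if (editorMaterial.HasProperty("_BaseMap"))
            playMaterial.SetTexture("_BaseMap", editorMaterial.GetTexture("_BaseMap"));
    }
}
```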
Another thing: I currently pull static lightmaps off of objects with Editor shaders and feed them into graphics render requests with Play shaders. In the future I’m going to experiment with transferring real-time lightmaps in the same way. I wonder if disabling the renderer on an object might exclude its real-time lightmap from updating.
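For the static case, this is roughly what I do today (playMaterial and the "_Lightmap"/"_LightmapST" names are placeholders for whatever my Play shaders actually expect):

```csharp
using UnityEngine;

// Sketch: grab a renderer's baked lightmap and UV scale/offset for use in scripted render requests.
public static class LightmapTransfer
{
    public static void CopyLightmap(Renderer source, Material playMaterial)
    {
        int index = source.lightmapIndex;
        if (index < 0 || index >= LightmapSettings.lightmaps.Length)
            return; // object has no baked lightmap

        LightmapData data = LightmapSettings.lightmaps[index];
        playMaterial.SetTexture("_Lightmap", data.lightmapColor);
        playMaterial.SetVector("_LightmapST", source.lightmapScaleOffset);
    }
}
```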
But that’s a problem for the future. For now I’ll just disable the renderers.
Well, to be honest, it’s hard for me to imagine your case and why you built it this way. I’m also not sure what the goal of creating “editor renderers” and custom graphics requests is, but I’m pretty sure the correct workflow is to have a “normal” runtime setup, with additional editor stuff on top that runs only in the editor, ideally without extra components - like OnDrawGizmos and similar functions.
Also, if you render stuff with your own scripted requests instead of normal renderers, you can check Application.isPlaying and whether it’s running in the editor to skip rendering the additional stuff.
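Something like this, as a sketch (SubmitDrawRequests is just a placeholder for whatever your rendering script actually calls):

```csharp
using UnityEngine;

// Sketch: only issue the custom draw requests in play mode; in edit mode
// the normal editor renderers are already doing the work.
public class ScriptedPortalRenderer : MonoBehaviour
{
    void Update()
    {
        if (!Application.isPlaying)
            return;

        SubmitDrawRequests();
    }

    void SubmitDrawRequests()
    {
        // ... your Graphics render request calls would go here ...
    }
}
```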
Yes, what I’m doing is pretty weird, and I appreciate the suggestions. My game involves a nonstandard portal rendering solution that supports high nesting counts and nonstandard scaling, and it uses draw requests from scripts. Here is a little video of me hitting play, starting with real object rendering on, and then switching to the scripted rendering system I have built.