Will SRP, URP and HDRP eventually end up using more and more Burst-compiled code, and finally make the entire codebase fully Burst-compatible by default?
In URP, we use Burst when it makes sense (at the moment, Decals and Forward+), but given our dependence on managed C# APIs, the entire codebase cannot be burstable. We are not working on replacing existing systems with Burst versions at the moment. Are you seeing some specific performance issues with the code?
If I remember correctly, URP and HDRP keep having annoying GC allocation regressions. That's one of the main reasons to make the entire codebase fully Burst-compiled, so it can no longer produce any GC allocations at all.
We certainly plan to make use of Burst in certain areas of the render pipeline code for both HDRP and URP. For example, we have "burstified" the best candidates of the render loop in HDRP, with huge gains in 21LTS, in particular leveraging it for decals and the light loop. And we will continue burstifying code paths where it makes sense. But it is not the plan to make the whole codebase burstable; this is also not necessary for optimal performance.
The decision to make something burstable or not depends on several factors, but mainly on how "hot" the code is (something that executes once a frame, or only when, say, an editor inspector is open, is not really on the "hot" path). Moving code to Burst has an overhead on the codebase: it becomes more difficult to maintain and extend, it becomes more difficult to read, and the architecture is less clear from the code. This has a significant impact on both internal and external developers.
For code that executes rarely, there is no realistic performance benefit from Burst, so it is better to just keep it as good old C#.
I believe we have an automated test in our CI that checks for GC allocations. If per-frame GC allocations do make their way into a release, we consider those to be performance bugs, so please do report it if you see one.
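For anyone who wants to write a similar check themselves: the Unity Test Framework ships a constraint for exactly this. A minimal sketch, with the hypothetical `MySystem.Update()` standing in for whatever per-frame method you want to verify:

```csharp
using NUnit.Framework;
// The Unity Test Framework constraint lives in this namespace; aliasing it
// avoids clashing with NUnit's own Is class.
using Is = UnityEngine.TestTools.Constraints.Is;

public class AllocationTests
{
    [Test]
    public void Update_DoesNotAllocateGCMemory()
    {
        // Warm up first so one-time lazy initialization isn't counted.
        MySystem.Update();

        // Fails the test if the delegate allocates any managed memory.
        Assert.That(() => MySystem.Update(), Is.Not.AllocatingGCMemory());
    }
}
```

This runs in the Unity Test Runner (edit mode or play mode), so a CI job executing the test suite will catch per-frame allocation regressions automatically.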
I do want to point out that as amazing as Burst is, it’s not a silver bullet for performance on its own. Although it is correct that it wouldn’t be able to allocate GC memory, Burst code can still have performance problems for other reasons.
I would like to know more about this. Are you referring to the interop performance issue of calling from managed C# into native Burst code and then back into managed C#?
I see. I guess if, in the future, the code needs to go fully Burst, it seems much better to just rewrite it entirely as a full DOTS codebase.
On HDRP, we have decals and lights currently burstified, and we are trying to move more components, like shadows.
Yeah, it needs to be a big enough chunk of code that the overhead is not too much. But what I meant was primarily that Burst cannot make slow code fast on its own; one still needs to actually write performant code.
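For readers unfamiliar with what "burstifying" a chunk of code looks like in practice, here is a hypothetical sketch (not code from URP/HDRP): a plain struct job operating on native containers, marked for Burst compilation. Note that Burst removes managed-code overhead, but the algorithm itself is unchanged; an O(n²) loop stays O(n²).

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;

// Burst compiles this job to optimized native code. It cannot touch
// managed objects, which is also why it cannot allocate GC memory.
[BurstCompile]
struct SumJob : IJob
{
    [ReadOnly] public NativeArray<float> Values;
    public NativeArray<float> Result; // length 1, holds the output

    public void Execute()
    {
        float sum = 0f;
        for (int i = 0; i < Values.Length; i++)
            sum += Values[i];
        Result[0] = sum;
    }
}
```

Scheduling such a job from managed code incurs a small fixed interop cost, which is why it only pays off for big enough chunks of work, as mentioned above.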
In URP’s Forward+, I think only the light index generation is burstified.
In HDRP, I think the whole light-information generation that fills the shader arrays is burstified. And it showed some amazing gains, IIRC.
Will similar light improvements be added to URP?
For URP Forward+, all the light culling and generation of the spatial data structure happens in Burst jobs. It might be interesting to do something similar to what HDRP is doing, but there are some considerations regarding wider platform support. We would first need to verify that it actually takes up a significant amount of URP frame time.
Does URP Forward+ still do this task exactly as you mention even when using DOTS Graphics, or does this task move to DOTS Graphics when it is enabled?
It works the same whether you’re using Entities Graphics or not.
May I ask how you do this?
I once wanted to measure GC allocations per method, to write a test for GC allocations.
I was trying to use the GC.[*memory] methods, but they weren't giving me proper results.
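One possible reason `GC.GetTotalMemory` gives odd results is that it reports the current heap size, which drops whenever a collection runs in between your two samples. A hedged alternative sketch, assuming your runtime profile supports `GC.GetAllocatedBytesForCurrentThread` (available on .NET Standard 2.1 / newer runtimes); `MeasureAllocations` is a hypothetical helper, not a Unity API:

```csharp
using System;

static class AllocationMeter
{
    // Returns the managed bytes allocated by one call to `action`.
    // GetAllocatedBytesForCurrentThread is a monotonically increasing
    // counter, so it is not distorted by collections running in between,
    // unlike the difference of two GC.GetTotalMemory samples.
    public static long MeasureAllocations(Action action)
    {
        action(); // warm up: JIT and lazy initialization allocate once

        long before = GC.GetAllocatedBytesForCurrentThread();
        action();
        return GC.GetAllocatedBytesForCurrentThread() - before;
    }
}
```

Inside Unity specifically, the test-framework constraint mentioned earlier in the thread (`Is.Not.AllocatingGCMemory()`) is usually the more reliable option, since it hooks the engine's own allocation tracking.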