ParticleSystem.Update using significant resources even for default particle system

Is the bug fixed in 2018.3?

I would also be curious to know when this is fixed, because 2017.3.1p4 (a good version) vs 2017.4.12f1 (and more recent) shows a big difference in particle system performance.

When using 2017.3 I run at 190 fps, but when I switch to 2017.4 I get roughly half the performance (around 90 fps).
My project is almost empty, but uses a lot of particles.

Hi everyone,

We (finally) have another update on this. Thanks for your patience.
The work has been completed and will be available in 2018.3. The first version it will be available in is 2018.3.0b7.

Due to the scope of the fix, we do not believe it is safe to backport to 2017.4 or 2018.2, as it could introduce other unwanted side effects. We have time to fix these kinds of issues during the 2018 beta cycle, but cannot risk shipping to our stable 2017 release cycle, in case it causes other issues. Apologies that this fix will only be available in the 2018 cycle.

To be clear:
The fix is in 2018.3
The fix is not in 2017.4 or 2018.2


Aargh, I am encountering this issue in 2017.4.10f1 in VR :( I was hoping LTS would mean we get fixes to these quite important bugs…

I don’t understand why the backport is too risky. The risk already happened, and it is in 2017 LTS in the form of a bug. Some of our projects are on 2017 LTS because, as the version name states, it is Long Term Support, and we are only in the first year of it.

I understand 2018.2 not getting the fix. That wasn’t surprising. But 2017LTS not getting the fix?

So, to minimize Unity’s risk of introducing new bugs to 2017 LTS, I will be forced to move my project to 2018.3 and take the risk of introducing new bugs to my project? No disrespect, but this is simply wrong.

A particle system that is broken, that will not be fixed for LTS because of risk.

If it is “not safe”, then make one that is “safe”. That is what the whole Unity Team is for.
If it is “not possible”, then mark LTS as a build “not safe” for production.
It is either one of the two.

Please, I urge you: look at it from the perspective of a developer and re-evaluate whether this is an issue that can be overlooked. Or drop the LTS name so that other people are not misled.

I’ll try to explain it…

That is not what risk is. Or at least not what this risk is. That was some previous risk we took when we changed the code the first time. We can’t go back and undo that. Mistakes happen. Making further changes introduces new, different risks.

In a nutshell, risk is the potential for unexpected bad things™ to happen as a result of us changing the code. We have already discovered 2 quite serious side effects from this fix in 2018.3, for which fixes are in progress. That’s OK because it’s still in beta (although new bugs are never ideal and we make every effort to avoid them). If I had backported this, a lot of people would have serious problems with 2017.4 right now. Far more serious than this bug.

Unfortunately this is about you taking the risk vs. everyone who uses 2017.4 being forced to take the risk. I understand that there are a small number of users such as yourself who care passionately about this issue, and I sympathise with that, but that alone is not enough for us to risk making changes that affect all Unity developers on the LTS version.

How will we know what is safe? Some bugs can’t be fixed in 2 lines of obvious code. I wish it was that simple :slight_smile:

I am truly sorry but there is no potential for it being backported. The change is large, bugs are still being fixed with the solution, and the surrounding codebase has also changed in 2018.3, making it non-trivial to even begin to integrate into 2017.4.

I know this must be disappointing, but I’ve attempted to explain it as fairly as I can - there will always be some bugs/problems that can only be fixed in future versions with new work/redesign of the code.

Best regards,


I really want to take your word for it, but you are not making any sense.

What do you mean you can’t go back and undo that? Nobody here asked to go back to previous code. We asked for a fix for a known issue. Are you going back to old code to fix it in 2018.3? I am guessing not. I don’t understand where this “undo” popped up from, unless I missed something.

You are mixing two different things. The first is about taking the risk of implementing a fix that may introduce new bugs. The second is fixing a known bug for an LTS. The two are different. The first can be handled, like you said, by working it out in the 2018 beta, which according to you is fine. (I don’t get how fixing a bug in a beta is more important than fixing the same bug in an already released build, but I will bite.) The second can be done on the LTS’s own schedule: it can either be a backport from 2018 when the fix is mature/stable, or a completely new one. Either way, the two questions (how to fix, and will you fix) are unrelated. They have nothing to do with whether a bug in LTS should be fixed or not. The fix can take time to arrive, and I will understand, but dropping support for LTS midway is just unprecedented.

Nope, it is about Unity taking the risk vs. the users of Unity. If you are worried about a change in code affecting everyone else, then work on a different repo, or release a beta of the LTS. Do whatever needs to be done. I mean, come on, there are archives of past versions. If this worries you more than actually fixing it, then separate a branch off of it. Again, there is no risk for users other than LTS versions shipping with known issues that were decided not to be fixed. The risk already crashed down on us developers. The only remaining risk or cost is on Unity.

Never said it was simple. It is a matter of “it needs to be done.” Imagine buying a car, and a part of it is broken, and the manufacturer says, “Well, how would I know if the new fix is safe? This can’t be fixed in 2 days; I wish it was that simple.” And then he says, “Just buy next year’s model.”

Also, if safety is what is stopping this fix, then why are you fixing it for 2018.3? Aren’t you worried that it will not be safe? I mean, how do you tell when it is safe? Are you sure you can get it safe by 2018.3’s full release? I mean, seriously?

So you are saying that the real reason behind this is that the work required is too much, and we were actually talking business rather than coding risks. I am really starting to hope this whole conversation just took a wrong turn and all this is a misunderstanding…

It is not disappointing, but it makes me wonder if I made the right business choice in using Unity. I understand that you are in a difficult position, and I am guessing the decision to abandon this bug was not made by you. What worries me is that the “future version” you mentioned is the LTS: a long-term supported build of the engine, not introducing new features but fixing bugs over its 2 years and providing a stable engine for production.

Again, please, please fix this for 2017LTS.

So I waited a few weeks for 2018.3 to come out because of this post. Did this get fixed? Mine is shocking! I’m using the particles as dirt spray in a racing game. I have reduced it more and more and it still runs rubbish. It’s literally a bog-standard particle system that emits 3 particles per second with a very basic shader. The game runs a solid 60 fps on Xbox X, and as soon as a single particle appears it drops to the high 40s. Using deferred lighting, by the way.

I recommend using some profiling tools to diagnose the cause more precisely. Or submit your repro project as a bug report and we’ll take a look once it’s been processed by QA.

I believe what richardkettlewell was trying to say is that the current problem with the particle system in 2017 LTS is not a code/programming bug but a code design flaw. Addressing the performance flaw demands a major rewrite of most parts of the particle system, which leaves little or no room for stress testing, and backwards compatibility cannot be guaranteed.

I am currently using 2018.3, but the use of the particle system is still a CPU performance issue for my current project, due to different reasons (mainly the drawing and rendering). I was wondering if Unity is looking into further optimizations, or off-loading the generation of vertices and the calculations to the GPU instead of the CPU.

Oh and with regard to the prewarming of particles, does it have to occur on every OnEnable()?


For the current Particle System, no, we are not looking at making any further significant optimizations.
However, we are developing a new, graph-based way to author particle systems, which performs all computation on the GPU. It’s in preview in 2018.3: https://unity.com/visual-effect-graph

Yeah, unfortunately that’s just “how it’s always been”, and we know it can cause performance spikes. We were recently asked about adding a faster, cruder pre-warm mechanism, which would be less accurate but far faster. I’d like to add this so you can reduce the stall in many cases. It would be even better if the prewarm could happen asynchronously on a thread, but that may be too large a change given that we are now shifting focus to the Visual Effect Graph.
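One way users sometimes work around the per-OnEnable prewarm cost is to disable the built-in Prewarm flag and fast-forward the simulation manually, only the first time the object is enabled. A minimal sketch (the `ManualPrewarm` class name and the `prewarmTime` value are made up for illustration; `ParticleSystem.Simulate` and `Play` are real Unity APIs, but whether this helps in your project depends on where the stall actually occurs):

```csharp
using UnityEngine;

// Hypothetical workaround sketch: instead of ticking the Prewarm checkbox
// (whose cost is paid again on every OnEnable), advance the simulation
// manually a single time the first time the object is enabled.
public class ManualPrewarm : MonoBehaviour
{
    [SerializeField] private ParticleSystem ps;        // assign in the Inspector
    [SerializeField] private float prewarmTime = 2f;   // seconds to pre-simulate (assumed value)
    private bool prewarmed;

    void OnEnable()
    {
        if (prewarmed) return;
        prewarmed = true;
        // Simulate(time, withChildren, restart) fast-forwards the system,
        // then Play resumes normal per-frame simulation from that state.
        ps.Simulate(prewarmTime, withChildren: true, restart: true);
        ps.Play(true);
    }
}
```

Subsequent OnEnable calls skip the fast-forward, so the spike is only paid once per component lifetime rather than on every re-enable.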

I want to let you know that there is currently a very serious bug with VR and particles in 2018.3 and 2019.1, caused by the bugfix discussed on this thread.

This is the bug:

We are working on a fix as a priority. Apologies for any inconvenience caused.


It is not that I am misunderstanding that this could be difficult. What I am pointing out is that this is LTS. People choose LTS because it is meant to be supported long term. It means projects will be started and finished on this particular branch because it is LTS. Now, given that, if a known issue is decided not to be fixed because it is hard and has risks… well, then it defeats the whole purpose of the LTS. I might as well use the 2019 alpha; I mean, seriously, is there a real difference if known issues are handled this way? Why claim long-term support when the issues that require long-term support will not be worked on?

This is just ridiculous.

@richardkettlewell
Hi Richard, is this particle system update issue fixed in 2018.3? I’m confused by the tracker page https://issuetracker.unity3d.com/issues/shuriken-particle-system-waitforjobgroup-cause-huge-fps-drop-after-upgrading-from-5-dot-6-to-2017-dot-1 which states “Fixed in Unity 2017.3.0f3”, versus your responses to jjejj87, basically saying it wouldn’t be fixed for 2017.4 and would only be fixed in 2018.3?
I’m currently getting a lot of slowdown (the profiler shows WaitForJobGroupID as the culprit) with several particle systems (with admittedly large quantities of particles each) in 2018.3.0f2, and it appears similar to what has been described in this thread by everyone else, but I don’t know if this is just pushing the limits of the system or a continuation of that issue (is WaitForJobGroup the same as WaitForJobGroupID?).

There is a fix for this issue currently being tested. The version it lands in will show up on the issue tracker link in the next few days:
https://issuetracker.unity3d.com/is…als-null-error-and-lead-to-graphical-glitches

I just had a look at the history of that bug, and it’s actually a different issue (but it’s easy to be confused, especially as the public page isn’t specific enough for you to know it’s a different issue). That bug is actually a regression caused by us doing some work at every simulation step that should only be done during each rendering step (imagine a prewarm may run 1,000 simulation steps to prepare a particle system, but it only gets rendered 1 time after the prewarm). That bug was due to a bad change we made that moved some code from rendering to simulation.

Maybe it got mentioned in this thread because other people mistook it for the WaitForPreviousRendering stall that this thread focuses on (too many posts for me to check).

The WaitForPreviousRendering fix is definitely only in 2018.3 and newer, no matter how much @jjejj87 stamps his feet about it :wink:


I am still getting this WaitForPreviousRendering lag (up to 100ms) and I am on 2018.3.12f.
However, I only get hit when using multithreaded rendering; without multithreaded rendering the lag is not there and (consequently) performance is much better.
I don’t think this issue is fully resolved.

I’m trying to get to the bottom of performance issues and noticed particles taking almost 3ms of my frame time in “WaitForJobGroupID”, which led me to stumble across this thread.

Frankly, this all feels like a mess even in 2021 LTS. My main issue here is clarity so I will explain my situation.

I know I am GPU bound so I’ve been spending time analysing builds in RenderDoc and optimising the biggest offenders. However before I take on a task of optimising the draw calls I make a point of disabling the drawing of things as a quick win to see what the result of the optimisation could be. I start disabling things until barely anything is drawing and I’m still not hitting 60FPS consistently.

Hmm… must be CPU bound as well; back to the profiler. I first check in the editor, since builds take me at least 5 minutes. But because I’m testing performance, and the profiler itself takes a lot of CPU time, I make release builds for a more accurate test, meaning I can’t just attach the profiler to the existing build without making a new one.

What doesn’t help is seeing a big whack of time spent on particles each time I look at the profiler. There is no way to know from the profiler alone whether a change I made actually affected particles; I have to go and disable all particle systems to confirm that no change registers.

Plus, now I have to make a new build to see if the spike shows up outside the editor, which means I also have to make tools to disable particles at runtime to test the changes.

I shouldn’t have to do this. This is horrible for clarity. What I want is something like what I see when profiling builds: just an overall spike in Gfx.WaitForPresent or something like that, because without it this cost is incredibly misleading. It also makes the particle module of the profiler almost pointless. If I turn on a really expensive particle system, is its cost suddenly going to be absorbed into the time spent waiting for a job? If so, when I first turn it on it will appear to cost nothing. The only way for me to know is to disable rendering and look for the change, and I should not have to do that. When profiling, what I need is accurate information that leaves no room for doubt.

I think, from a profiling perspective, this setup is fundamentally flawed. I don’t think there is a name you can give this sample without it being misleading. E.g. “Wait for last present” would lead my first thought to be that particles specifically took a long time to render last frame, which seems not to be the case.

Overall the system itself feels flawed. Yes, the spike will move, but if I have no particles in a project the particle system manager shouldn’t be stalling the main thread waiting for a render to finish. Surely this should instead be handled by some sort of frame-timing manager.

I’ve lost track of how many times I’ve been confused by this spike and wasted time disabling particles before having to search online for what is going on.