If you’ve got any full or mature projects with garbage collection issues - that is, where the GC spike, not the allocation itself, is causing a stutter or hitch in your game - then please send the project in full as a bug report and prefix the first line of the text box with FOR GARBAGE COLLECTION TEAM.
Unity is looking for real-world examples of garbage collection actually causing a gameplay spike, rather than reports about minimising allocations. This will help the team pinpoint areas of pressure and reduce that pressure in Unity itself.
My two remaining sources of per-frame / per-tick allocations are:
CharacterController.Move() → making ControllerColliderHit a struct, like RaycastHit, would solve this. Why it’s currently a class is a mystery. Adding another field with a stable rollback position for the CC would also be extremely helpful. Meanwhile, some of the current fields are redundant and could be removed.
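Until something like that exists, the usual workaround is to copy the fields you need out of the allocated ControllerColliderHit into a plain struct so nothing downstream holds on to the engine-allocated instance. A minimal sketch (HitInfo and PlayerMotor are hypothetical names, not Unity API):

```csharp
using UnityEngine;

// Plain struct holding only the hit data we actually use.
public struct HitInfo
{
    public Vector3 point;
    public Vector3 normal;
    public Collider collider;
}

public class PlayerMotor : MonoBehaviour
{
    private HitInfo lastHit;

    void OnControllerColliderHit(ControllerColliderHit hit)
    {
        // The 'hit' argument itself is still allocated by the engine each
        // callback; copying into a struct only avoids us stacking further
        // allocations and references on top of it.
        lastHit.point = hit.point;
        lastHit.normal = hit.normal;
        lastHit.collider = hit.collider;
    }
}
```

Note this doesn’t remove the per-hit allocation - only making ControllerColliderHit a struct at the engine level would do that.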
Physics.OverlapSphere() → Like many methods, this should have a form where you pass in an ICollection<>/IList<> that gets filled, rather than instantiating a new array on every call. This method is especially important, as it’s one of the cornerstones of doing robust physics.
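The caller-side pattern being asked for would look something like this - note the overload shown here is hypothetical, not a current Unity API; it sketches what a buffer-filling form could be:

```csharp
using UnityEngine;

public class ProximityScanner : MonoBehaviour
{
    // Preallocated once, sized for the worst case we expect; reused every
    // physics tick, so no garbage is generated per call.
    private readonly Collider[] results = new Collider[32];

    void FixedUpdate()
    {
        // HYPOTHETICAL overload: fills 'results' and returns the number of
        // hits written, instead of allocating a fresh Collider[] each call.
        // int count = Physics.OverlapSphere(transform.position, 2f, results);
        // for (int i = 0; i < count; i++)
        // {
        //     // process results[i]
        // }
    }
}
```

Returning the hit count (and writing into the caller’s buffer) is the same shape used by non-allocating APIs elsewhere, and it degrades gracefully: if there are more overlaps than the buffer holds, the method can simply truncate.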
You don’t need an example project to see that the allocations from these things cannot currently be avoided and will eventually trigger the GC, with the spike interval based on the number of active objects, and the spike magnitude based on how long the GC blocks other threads.
If Unity seriously wants examples and data, maybe someone at UT should start or move this thread in the Support area?
It’s not very promising that this is in the Gossip section and announced by an “outsider” (yes, I know hippo has good connections at Unity, but still…).
In particular: they do not want “method X allocates” reports - they want projects that actually suffer from GC spikes.
This is because the allocations are only half the problem; if you submit something that has actual spikes in it then they can analyse them and may be able to improve the situation by tweaking the GC itself.
Without any disrespect meant, I believe this is kind of silly. The GC problems are well known, even at UT, and as admirable as it might be that they actually want to try to improve things, there is not much point in hacking and doctoring around in this ancient and obsolete GC. Better to use those resources to solve, or work around, the Mono issue.
Again, I don’t want to step on anyone’s toes, but I had to say this, because it’s getting ridiculous.
What does that even mean? Should I go back and make my code worse - allocate iterators and boxes everywhere, put in a bunch of LINQ queries, convert my structs to classes, remove pools, uncache the things that require upfront allocations, etc. - so that I suffer more GC in order to matter to them?
What is the definition of actual? A GC spike every 6 seconds? Every 3 seconds? Twice a second?
They “in particular” aren’t interested in fixing the APIs that make certain per-frame allocations necessary? Or are these fixes already on the agenda, and if so, what is the timetable for their release?
What’s the plan here, to tweak the frequency of GC? To segment memory based on types and run different GC schemes on the various segments? To write a high quality, generational GC based on the concepts of Hotspot or other top notch VMs? Even if that’s the case and they are actually successful, such a thing would be a long, long way out and fixing the APIs would STILL be beneficial.
If good code had ALL zeros instead of mostly zeros in 99.5% of frames, there would be a lot less suffering under any GC.
Real-world applications sometimes raise issues that you won’t find by deliberately writing something to generate obvious, known, repeatable errors. Most questions that go unanswered are that way because nobody has the answers - everyone’s had different experiences.
I imagine this is what they’re after - not the obvious stuff, but the things that aren’t easy to fix because they only appear under certain circumstances. Perhaps I’m wrong, but that’s what I read from it.
Incidentally, what’s the maximum upload through the bug reporter? (is there one?).
No, though if you could retrieve a copy of your project from source control prior to you doing all those optimizations then it might be worth sending that in.
Whatever you consider a problem. If it’s spiking once every 5 minutes but you’re finding that it’s noticeable and harms your gameplay experience, then submit that. If it’s spiking every 30 seconds but your game is such that it doesn’t really matter, then don’t submit that. Etc.
The purpose of this is not to make an engine that ‘has no GC spikes.’ It’s to make an engine that can be used to build and ship great games. If the GC behaviour is preventing you from doing that, then they want to know about it. If it isn’t, then why do you care?
I have no idea. But the point is: it’s not difficult for them to go through all the APIs and identify for themselves which methods do and do not allocate memory. What they can’t do by themselves is examine the way that all the various aspects of memory usage within real-world projects come together to result in a GC spike - not without being given real-world projects to examine.
It’s true that reducing allocations will make things better regardless. But: doing so is not free - both in terms of Unity engineering time, and in terms of impact to the end user (people still have to change their code to use allocation-free versions of API functions and so on) - and doing so may not be the most bang-for-the-buck thing to do right now. Tweaking the GC might be a lot faster and simpler than changing the public API; neither you nor I can know that. The only ones in a position to make that call are UT themselves. So the request is that we give them more information about the actual problem (‘my game doesn’t run smoothly’) rather than us deciding for them that allocations are the things to focus on.
Of course they perceive a problem. They’re not looking for proof that the problem exists - they’re looking for real-world examples of it that they can study to better understand it. Identify patterns. And so on.
Hey, 37GB Assets folder right here. If you want to give Unity your project, and Unity want to look at it, then between you I’m sure you can find a way to make it work. NDAs, private FTP servers, this stuff has all been done before. You can file a bug report, and explain in the report that your project is too large to submit and you’d like to arrange another way to deliver it, and support can work with you to get it done.
Artificially introducing allocations would be counter to the point. I think they already do use those projects, but they’re not real-world.
That’s fine. If you have a single scene that demonstrates it, that’s great too. For ongoing issues, people are sometimes added to the beta, so if you’re interested in being involved, just send in a project which has a spike that affects gameplay.
Using the Vuforia plugin, it constantly allocates 1.2 MB from - I believe - ReadPixels, though I still have to verify that’s the method. Just running the basic plugin example scene with pretty much any image effects drops the frame rate too much on mobiles (< 20 fps). When I asked Vuforia about it they said their product is best in class, blah blah blah - no concern from them. How much allocation is considered a lot? Is a constant 1.2 MB a lot?
Yeah, the wording sounds like this is almost to say “See, there is no GC problem, because real devs don’t get spikes” and I really hope that’s not it. The “real devs” have removed their spikes already by coding around the GC when they see it happening; that doesn’t mean that it’s ok. We shouldn’t have to code around the GC to begin with.
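To be concrete about what “coding around the GC” means in practice: it’s preallocation and pooling everywhere. A minimal sketch of the kind of object pool people end up writing (the Pool class here is illustrative, not a Unity or .NET API):

```csharp
using System.Collections.Generic;

// Trivial object pool: instances are recycled instead of being left
// for the garbage collector, trading memory headroom for zero
// steady-state allocations.
public class Pool<T> where T : class, new()
{
    private readonly Stack<T> items = new Stack<T>();

    // Reuse a pooled instance if one exists, otherwise allocate.
    public T Get()
    {
        return items.Count > 0 ? items.Pop() : new T();
    }

    // Hand an instance back for later reuse instead of dropping it.
    public void Return(T item)
    {
        items.Push(item);
    }
}
```

It works, but every pooled type is code the engine’s API design forced us to write - which is the point being made above.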
Rather than “tweak” the shitty mark-and-sweep GC, they should throw it out and write their own generational GC. There are a lot of papers out there on how to write one. It sounds like relations with Xamarin have deteriorated to the point where they have cut ties entirely, so if we are resigned to not using the Xamarin GC or the Microsoft GC then, as much as it pains me to say it, I think the only real option is for Unity to start writing a new GC.
Then I kind of feel like this isn’t necessarily the best way to go about it… they want projects from devs who own Pro and know how to use the Profiler, who have a game that’s almost finished, and who found GC spikes that make their game unplayable but for some reason never bothered to fix them. That seems like a small and not very “real world” sample.
Personally, I would do just what the first post says not to do: use established test cases, like doing a bunch of string allocations, and see how it compares to the Xamarin and MS GCs. They could use the same test cases that Xamarin itself used to showcase the difference between SGen and BDW. I guess as long as they’re doing both standard test cases and random people’s games, it’s fine.
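The kind of crude string-allocation stress test being described could be as simple as this - plain .NET C#, no Unity needed, just to expose the worst pause length (this is my own illustrative benchmark, not one of Xamarin’s actual SGen/BDW test cases):

```csharp
using System;
using System.Diagnostics;

class GcStringStress
{
    static void Main()
    {
        var sw = new Stopwatch();
        long worstMs = 0;

        // Simulate 1000 "frames", each churning through short-lived strings.
        for (int frame = 0; frame < 1000; frame++)
        {
            sw.Restart();
            for (int i = 0; i < 10000; i++)
            {
                // Each concatenation allocates a fresh string; most die
                // immediately, which is exactly the workload a generational
                // collector handles well and a mark-and-sweep one does not.
                string s = "frame " + frame + " item " + i;
            }
            sw.Stop();

            // A GC pause shows up as one iteration taking far longer
            // than its neighbours.
            if (sw.ElapsedMilliseconds > worstMs)
                worstMs = sw.ElapsedMilliseconds;
        }

        Console.WriteLine("Worst iteration: " + worstMs + " ms");
    }
}
```

Run the same binary under Boehm, SGen, and the Microsoft runtime and compare the worst-iteration numbers; the average throughput matters far less for games than that single worst pause.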