Just downloaded this to have a play and WOW is it slow. I have an 11 GB snapshot (a big one, for sure) but I can do nothing with it.
I clicked on one section for strings taking up 200 MB (I think it was that size; it could have been bigger) which seemed interesting enough to look into. It took, literally, 10 minutes to process the mouse click.
Once it loaded, I could look at thousands of strings in this weird layout, but it felt useless since clicking on a string only told me its length, the first character and, I'm assuming, a list of references.
Since that first click took so long, I’m afraid to use any other functions now for fear of hitting another massive loading screen!
Hi there,
We are aware that the current table and tree map UI isn't as efficient as it could be. I'd avoid the Overview page and try just the All Managed Objects table, though that might still not be that much faster. We've recently started rewriting the table UI to something more scalable, which will likely still take us through spring. The Tree Map will also be overhauled.
Having a more varied set of snapshots that were previously a pain to work with would give us something to measure against. So if you're up for it, it would be great if you could zip that capture up, attach it to a bug report and ping me the issue ID so I can pick it up. Or, if it's still too big when zipped, upload it elsewhere, like Google Drive, and PM me the link.
Dang, thanks for the fast reply!
Ah, I deleted the snapshot because I assumed it would be pretty huge, but I'll try to remember to take another one tonight and upload it somewhere.
Unfortunately the All Managed Objects table is probably going to be useless to me as well: my project uses Burst-compiled jobs for the heaviest lifting, so everything is (understandably) lumped into unmanaged memory. I was already fairly sure this tool wouldn't help much in my situation; I just wanted to use it to sanity check that it's not something else eating memory (it's not).
As a side note, do you think there could be a mode in the future to swap jobs' memory from unmanaged to managed so it can be analysed?
I fully appreciate this is likely no small feat. To elaborate, what I am hoping for is an option (editor-only is fine) as part of packages like Collections and Jobs that would run them using managed arrays. Of course the performance gains would be gone, but if it meant I could get some memory insight it would be really helpful.
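A purely illustrative sketch of what I'm imagining; the MANAGED_MEMORY_DEBUG define is something I've invented here, not an existing Collections/Jobs feature:

```csharp
using Unity.Collections;

public class ChunkData
{
#if UNITY_EDITOR && MANAGED_MEMORY_DEBUG // hypothetical define, invented for this sketch
    // Managed array: visible in the Memory Profiler's managed views,
    // but unusable from Burst-compiled jobs.
    public float[] Heights = new float[64 * 64];
#else
    // Unmanaged array: fast in jobs, but lumped into "unmanaged" memory.
    // Disposal omitted for brevity.
    public NativeArray<float> Heights =
        new NativeArray<float>(64 * 64, Allocator.Persistent);
#endif
}
```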
Overall, I guess I am asking: do you have any advice/plans to make it easier to profile memory issues in a large project using lots of jobs/unmanaged memory? Because my alternative is tediously editing each job and/or manually logging native array lengths!
I know the DOTS teams are working to improve debugging and profiling capabilities via DOTS Profiler Modules that can be added to the main Profiler Window, as well as the Entity & Systems debuggers, and there's some backend work happening towards better DOTS & Temp memory profiling. Eventually that will make the package more useful for DOTS profiling too, but yes, right now the focus is on getting a good base covering the non-DOTS use cases. However, I'm at least not aware of a Native collection → Managed Array swap. Would the content of the collections be necessary/helpful for your profiling needs?
Or rather, what kind of problems are you running into and what extra info do you think would help you in solving these?
Well, at the moment I'm having unpredictable crashes, and the only info I get from the dump is that an external tool (FMOD, for audio) is trying to access memory that it doesn't have access to.
When looking at a build I see about 3 GB out of 16 GB in use, but in the editor I'm seeing much higher usage, closer to 12 GB at times. I'm fully aware that memory usage between the editor and builds will be different, but since it's such a vast difference I'm not confident that tools like Windows Task Manager are fully tracking unmanaged memory as part of the application.
So to start, all I was trying to do was a sanity check in the editor of where my memory is and which jobs etc. were allocating/processing the most, to make sure things lined up.
As a note, this is quite likely to be a crash caused by an error in a job, since I do have a few of those that appear rarely in the editor and that I've not managed to track down yet. I just wanted to sanity check that it's not actually running out of memory first.
What would really help is seeing which classes etc. the unmanaged memory currently belongs to. In my case I'm making a game similar to Minecraft and I keep a lot of data around for things like chunks, lighting, caves, animals, biomes, insects, decorations, GPU instancing and so on. Being able to see that the caves manager class is holding onto 1 GB of memory in total would help narrow down the search.
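A made-up example of the kind of manager I mean (class names and sizes invented):

```csharp
using Unity.Collections;
using UnityEngine;

// Invented example: a long-lived manager whose fields own large unmanaged
// allocations. The ask is for the Memory Profiler to attribute these
// NativeArray bytes to this class.
public class CaveManager : MonoBehaviour
{
    NativeArray<float> density;     // e.g. a cave density field
    NativeArray<int> chunkOffsets;  // per-chunk lookup data

    void OnEnable()
    {
        density = new NativeArray<float>(64 * 64 * 64, Allocator.Persistent);
        chunkOffsets = new NativeArray<int>(4096, Allocator.Persistent);
    }

    void OnDisable()
    {
        if (density.IsCreated) density.Dispose();
        if (chunkOffsets.IsCreated) chunkOffsets.Dispose();
    }
}
```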
Oh, sorry, as for the contents of the collections: no, I don't care about the contents. Sure, it would be nice to have, but I'm more interested in seeing where each collection is held.
The only reason I was interested in the unmanaged-to-managed swap is that it seemed an easier way to work: rather than trying to make the profiler work with unmanaged memory, I could run the build/editor in a managed mode and use the same tools to see more info.
Right, that's why I was asking. In terms of ease of implementation, there might be something somewhat easier that could be done in the package, at least for some non-Temp collections that aren't just referenced from the stack. I'll have to prototype something in January to verify that thought, though.
So, I'm going to be a horrible tease, but: yeah, something is possible.
For the next version though, it'll just be in the details for Managed Objects:
Do you have an idea if that would already help at least somewhat or still be pretty useless for your use case?
Oh, very nice!!
Yeah, I think this will be perfect, because I allocate and hold onto the memory references ready to go into jobs, so this will point those out.
Come to think of it, this is the recommended best practice anyway (sort of):
https://docs.unity3d.com/Manual/JobSystemTroubleshooting.html
So it should help most use cases and only leaves memory generated inside jobs as a question mark, but honestly this would be a drastic improvement, because that becomes a much more niche case to track down.
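A minimal sketch of the pattern I mean, with invented names: the array is allocated once, held in a managed field, and handed to jobs, so a managed object always references it.

```csharp
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

public class LightingSystem : MonoBehaviour
{
    // Allocated once and kept alive by this managed field.
    NativeArray<byte> lightLevels;

    void OnEnable() =>
        lightLevels = new NativeArray<byte>(16 * 16 * 256, Allocator.Persistent);

    void Update()
    {
        // The same pre-allocated array goes into the job each frame.
        new PropagateLightJob { Levels = lightLevels }.Schedule().Complete();
    }

    void OnDisable() => lightLevels.Dispose();

    struct PropagateLightJob : IJob
    {
        public NativeArray<byte> Levels;
        public void Execute() { /* propagate light values */ }
    }
}
```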
Thanks again for looking into this!
The only other scenario I have a question about is: would this work for detecting the memory within the job, so long as we have a reference to the job itself? E.g.:
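Something along these lines, with invented names, where a managed field keeps the job struct (and thus its NativeArray) reachable:

```csharp
using Unity.Collections;
using Unity.Jobs;

// Invented example: the job struct itself is kept in a managed field,
// so something managed still references it (and its NativeArray) after
// scheduling. Disposal once the handle completes is omitted for brevity.
public class CaveGenerator
{
    CarveJob pendingJob;
    JobHandle handle;

    public void Kick()
    {
        pendingJob = new CarveJob
        {
            Voxels = new NativeArray<float>(32 * 32 * 32, Allocator.TempJob)
        };
        handle = pendingJob.Schedule();
    }

    struct CarveJob : IJob
    {
        public NativeArray<float> Voxels;
        public void Execute() { /* carve cave voxels */ }
    }
}
```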
Happy to hear that 
A follow-up release will then hopefully provide even better clarity on the NativeArray allocations in the Allocations table and Memory Map, based on the same approach.
Yes. As long as something is still referencing the job, and the struct with the NativeArray isn't an unsafe struct, the thing that is referencing the job struct will show all fields of the struct, including the NativeArray.
Amazing, looking forward to seeing this in a future update! Do you have any rough ETA on a first release?
Yep, same ETA as the release mentioned in this other thread 
And 0.5 is released, including that NativeArray display in the Managed Fields; see the update notes here.
Looks great, thanks for letting me know! I've got some stuff to wrap up but will check it out over the weekend.
I found time to give this a try but didn't make much progress.
It crashed on me a few times and got stuck on loading screens for too long; one was over 2 hours while trying to quit, though it seems most likely it got stuck in an infinite loop.
I am seeing one interesting thing, though, that I've not been able to confirm yet due to the loading screens. First, I see 3.69 GB of strings in use, and I can't think of an immediate reason to have any large number of strings in the project; e.g. I'm not assigning random/unique names to things.
However, I do use this class extensively:
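It's roughly this shape (a simplified sketch, not the exact code; names are approximate):

```csharp
using System;
using UnityEngine.Profiling;

// Simplified sketch: a throwaway scope object, created and discarded
// on every call, that wraps a profiler sample.
public class ProfileScope : IDisposable
{
    public ProfileScope(string name) => Profiler.BeginSample(name);
    public void Dispose() => Profiler.EndSample();
}

// Typical call site, once per call, every frame:
// const string kSampleName = "CaveManager.Update"; // the const in question
// using (new ProfileScope(kSampleName)) { /* work */ }
```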
and I see it coming up a lot in the profiler:
I get that this class isn't really ideal, since I'm making and throwing away a class with each call, each frame. It's a shame I can't make structs implement a pseudo-IDisposable! But that aside, do you think it's generating a string each call?
In my head, that string is a const, so it shouldn't be generated each frame, but given the info above, do you think that is happening as a bug/unintended consequence? I realise as I'm writing this that I might need to play with passing it by ref, but surely I shouldn't need to for a const.
Anyway, this weekend I'm demoing the project to the public, so I don't have time to play more, but after that I will try to play properly and make some example projects to try to isolate whether this is even a cause. I thought I would share what I have played with so far, though.
Well, that string is sent to the native backend each time, which isn't ideal, and we have ProfilerMarker.Auto for that, which
A) doesn't send a string to the backend on each Begin/End but only on creation, and
B) has its AutoScope struct's constructor and Dispose calls ignored for Deep Profiling, so that it won't trash the recorded sample structure when deep profiling is enabled.
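For comparison, the intended ProfilerMarker.Auto usage looks like this (the class and marker names are just placeholders):

```csharp
using Unity.Profiling;

public class CaveSystem
{
    // The marker name crosses to the native backend once, right here.
    static readonly ProfilerMarker s_UpdateMarker =
        new ProfilerMarker("CaveSystem.Update");

    public void Update()
    {
        // Auto() returns a struct whose Dispose ends the sample; its
        // constructor/Dispose pair is ignored by Deep Profiling.
        using (s_UpdateMarker.Auto())
        {
            // per-frame work
        }
    }
}
```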
But I'm unsure what kind of string mess could be caused by your solution.
The long opening times (which is where I presume you're hit with the long wait times?) might be down to the mesh generation for the Tree Map, which is terribly slow with lots of objects and needs to be improved/replaced. We already have that planned. I might still move it out of the Summary page for 0.5.1, to speed up initial opening at least.
Alternatively, 0.5 also got some fixes for reading strings from the snapshot more reliably, and it added those to the Name column… I'd hope that wouldn't impact performance too much on opening, though (*), but it could affect filtering and sorting on the Name column…
(* If it can now correctly read more strings that would've otherwise been "Unable to Read Object" entries… yeah, that might impact things, as there'd be more string objects that the rest of the tool would need to deal with, e.g. for that System.String group in the Tree Map and calculating their sizes and such.)