Path(s) to root

Is there some way to easily see a managed object's path(s) to root? If not, this would be a very useful feature for finding leaks. The best I can do now is guess my way through the references, which often get too convoluted to follow to root unless I'm lucky or already have a good idea of what's going on.

These are displayed in the panel in the lower-right, as shown in this screenshot:

If it's empty, the object either has no path to root, or it's another bug in Unity's Memory Profiling API, or an issue in Heap Explorer. You can also try Unity's new Memory Profiler to see if it works there:

Do you use the .NET 4 scripting runtime? It contained various bugs in the memory profiling API the last time I tested it.

Hey Peter. I was asking about the new Unity profiler. I have used Heap Explorer in the past, and it's been great (so thanks for your efforts there!), but it's been having issues recently (so maybe it's the .NET 4 runtime, which we do use, or the Unity 2018.3 changes). I do get a lot of errors from Heap Explorer like the following, so that might have something to do with it. I know you have stopped development and open-sourced it, but I haven't had time to try to debug what's going on.

ERROR: 'System.Collections.Generic.Dictionary.Entry<System.Int32,System.Globalization.CultureInfo>[].baseOrElementTypeIndex' = -1 is out of range, ignoring. Details in second line
arrayRank=1, isArray=True, typeInfoAddress=4239CAA0, address=3C337BD0, memoryreader=MemoryReader, isValueType=False
UnityEngine.Debug:LogErrorFormat(String, Object[])
HeapExplorer.AbstractMemoryReader:ReadObjectSize(UInt64, PackedManagedType) (at Assets/HeapExplorer/Editor/Scripts/MemoryReader.cs:447)
HeapExplorer.PackedManagedObjectCrawler:SetObjectSize(PackedManagedObject&, PackedManagedType) (at Assets/HeapExplorer/Editor/Scripts/PackedTypes/PackedManagedObjectCrawler.cs:595)
HeapExplorer.PackedManagedObjectCrawler:CrawlManagedObjects() (at Assets/HeapExplorer/Editor/Scripts/PackedTypes/PackedManagedObjectCrawler.cs:335)
HeapExplorer.PackedManagedObjectCrawler:Crawl(PackedMemorySnapshot, List`1) (at Assets/HeapExplorer/Editor/Scripts/PackedTypes/PackedManagedObjectCrawler.cs:77)
HeapExplorer.PackedMemorySnapshot:Initialize(String) (at Assets/HeapExplorer/Editor/Scripts/PackedTypes/PackedMemorySnapshotEx.cs:1088)
HeapExplorer.HeapExplorerWindow:ReceiveHeapThreaded(Object) (at Assets/HeapExplorer/Editor/Scripts/HeapExplorerWindow.cs:983)
HeapExplorer.ReceiveThreadJob:ThreadFunc() (at Assets/HeapExplorer/Editor/Scripts/HeapExplorerWindow.cs:1034)
HeapExplorer.HeapExplorerWindow:ThreadLoop() (at Assets/HeapExplorer/Editor/Scripts/HeapExplorerWindow.cs:783)

[quote=“Skittlebrau”, post:3, topic: 742399]
I was asking about the new Unity profiler.
Oh yeah, sorry I totally missed that. For some reason, I thought you private messaged me. I didn’t recall I subscribed to the Profiler forum. My fault :face_with_spiral_eyes:


Path to root is in our backlog; it's needed to reach feature parity with the old BitBucket memory profiler, so it will be in before we move out of preview. I can't give an ETA though.

A learning from Heap Explorer: finding root paths in more complex projects can take a really long time. In Heap Explorer, that search loop can run a huge number of iterations, often taking multiple seconds to finish.

I normally add "loop guards" to while(true)-style loops to avoid a bug causing the application to run forever, and I assumed at the beginning that this loop should not run more than 10,000 iterations or so. I then increased it to 100,000, then 200,000, and users still reported the loop guard kicking in. Eventually I increased the limit to 1,000,000 iterations and moved the work to a separate thread. Thinking about it now, I should actually remove the loop guard altogether, because it's no longer needed.
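The loop-guard idea can be sketched roughly like this (a hypothetical Python sketch, not Heap Explorer's actual code; the function and parameter names are invented for illustration):

```python
from collections import deque

# Hypothetical sketch of a guarded reference walk: a hard iteration cap on an
# otherwise long-running loop, so a bookkeeping bug (or a huge graph) surfaces
# as an error instead of a hang. Not Heap Explorer's actual implementation.
def walk_referencers(start, get_referencers, max_iterations=1_000_000):
    """Breadth-first walk towards roots, aborting if the loop guard trips."""
    queue = deque([start])
    visited = {start}
    iterations = 0
    while queue:
        iterations += 1
        if iterations > max_iterations:
            raise RuntimeError("loop guard tripped: cycle bug or huge graph?")
        node = queue.popleft()
        for ref in get_referencers(node):
            if ref not in visited:
                visited.add(ref)
                queue.append(ref)
    return visited
```

Note that with a visited set the loop terminates on its own anyway, so the guard is belt-and-braces against bookkeeping bugs rather than strictly necessary.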

In order to not block the UI, I scan on a separate thread and show "Please wait" and a cancel button in the "Paths to root" panel. All other functionality still works while it tries to find the roots, so you can use the tool while it's scanning the paths. If the user selects a different item while it's still scanning, I abort the task and start it for the newly selected item, so UI responsiveness doesn't suffer.
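That cancel-and-restart scheme could look something like this (a minimal, hypothetical sketch using Python threads; the class and method names are invented for illustration, this is not Heap Explorer's code):

```python
import threading

# Sketch of the pattern described above: run the scan on a worker thread and
# cancel/restart it whenever the user selects a different item.
class PathScanner:
    def __init__(self):
        self._cancel = threading.Event()
        self._thread = None

    def select(self, item, scan_fn, on_done):
        # Abort any scan still running for the previous selection.
        if self._thread and self._thread.is_alive():
            self._cancel.set()
            self._thread.join()
        self._cancel = threading.Event()
        cancel = self._cancel  # captured so a later select() can't swap it out

        def worker():
            result = scan_fn(item, cancel)  # scan_fn should poll cancel.is_set()
            if not cancel.is_set():
                on_done(result)             # e.g. repaint the "Paths to root" panel

        self._thread = threading.Thread(target=worker)
        self._thread.start()
```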

The "Please wait" approach is also not a great solution; it would have been better to update the UI with the paths found so far while it keeps scanning for further ones.


I haven't thought the problem through, so perhaps I'm being super ignorant here, but isn't "path to root" essentially the same check that the garbage collector does? Basically a full traversal of the object graph?

Is the new Unity profiler not open source? I could not find anything but the old one.

@Skittlebrau : The new Memory Profiler UI is a package, so the code is C# and available for your inspection, yes.
You can find the release announcement forum post pinned in the same subforum as this thread, and instructions on how to install it in its documentation.

Ah, I found that it's buried in the Library directory. I had no idea and thought the packages were a binary delivery of DLLs. Just in case anyone else reading this is confused.


Did the path to root feature ever get added? (Martin above says that it'd be needed before the memory profiler moved out of preview, but I'm on 1.0.0 and I can't see it).
I've got some leaked objects with a ton of references to each other, and finding the root path is proving awkward. (We've just upgraded to 2022.3, so Heap Explorer isn't an option for us any more, unfortunately!)

[quote=“Rob-Fireproof”, post:10, topic: 742399]
Did the path to root feature ever get added? (Martin above says that it’d be needed before the memory profiler moved out of preview, but I’m on 1.0.0 and I can’t see it).
No it hasn’t. At least not to the full extent of that. To my own surprise, after we added the References view, building that out to a full Paths To Roots view wasn’t deemed critical enough to hold back the 1.0 release. Also we hit a few snags along the way, so it was deferred to later.

We do have that References view in 1.0, which tries, on selecting an item, to calculate the path to root from that item through everything that's referencing it. It (mostly) does the job, probably at least as well as the BitBucket version of Paths To Roots did (I think it's the same basic approach too), for relatively trivial reference webs. But for more complex setups it has to stop before it reaches all roots or it would take too long, and, as I've recently discovered, your system is likely to run out of memory.

And in the interest of elaborating so it doesn’t look like we just shrugged our shoulders and decided it wasn’t worth the effort, I’m going to be a bit more transparent on this particular issue for context:

Did I already mention that we hit some snags?
Well, yeah, that running out of memory is only the latest of them. But also, in the rush towards releasing 1.0 we:

  • Were hitting all sorts of bugs in our data and algorithms
  • Were lacking data like what the Asset Bundles hold on to and what the Scene Roots are (and without these, the reference web between Transform components is like a mobile (yep, that baby toy, not the phone): you can pick up any element and it will look like it is the root of all its references)
  • Had general issues with the system.

So we didn’t stop long enough to think about this top down approach from a mathematical angle. Instead we blamed the difficulty of building it on the issues above and hoped that “that next fix” or “that extra data bit” would clear everything up. It didn’t.

I had an inkling that it would be reasonable to go from the roots to the leaves, but we got the scene roots data relatively late and had already lost too much time on the other approach. Then last month I revisited the References view as a "quick side effort", looking for "low hanging fruit" (spoilers: it never is). I was trying to make it a bit clearer, e.g. highlighting items where we didn't dig deeper because they refer to other elements elsewhere in the tree, or because the algorithm ran out of its element-count processing limit, which is imposed to make sure it ends and presents something in a reasonable time frame (or, as it turns out, at all before crashing OOM). Because yeah, right now it's hard to impossible to tell if a dead end is a root or what the heck is going on there.

It's only then that I discovered that the other way around (the one we used so far) is actually mathematically impossible to solve in all cases (in theory), and that in practice you hit that limit way faster than I would have anticipated. I've dubbed it the Ancestry Problem (though someone else might have hit or named this before; I'm not a mathematician but a designer by training and a programmer (or technical designer) by trade, so to some Computer Science/Math M.S.'s this might not be all that surprising).

For that Ancestry Problem, let's look at a typical human's ancestry tree. You'd usually have: 2 parents, 4 grandparents, 8 great-grandparents, 16 great-great-grandparents, etc. (oh look, that sequence looks familiar).
If you did that for all the generations that came before you, you'd reach a point where you theoretically had more ancestors than people alive on the planet at that time, and only a few generations further back your result gets bigger than the total number of humans that have ever lived.

That’s theory. Practically that tree will have convergences, so an algorithm now has to figure out if one of the ancestors it found, was already found elsewhere in the tree. Normally you’d use a hashmap or dictionary for that. But now you have an extra thing to check, which slows things down and takes up more and more memory.

Now, memory references don't scale with powers of 2. You could have, say, a singleton X with a static array of 1000 references to 1000 objects of type A, each of which has an array of 1000 references to another 1000 objects of type B. Let's even assume that the arrays on type A are all copies of the same array (copies, not copies of references to the same array), so that there are only 1000 objects of type B in total. Each instance of B has another such copied array of 1000 references to 1000 instances of type C. Now, if all of these instances of C reference the same single object of type D, things start getting tricky.

If you start at D, it looks like there are 1000 direct references from objects of type C to it. So you check all of them; each of those has 1000 references from arrays on objects of type B, and each array has 1 reference from a type B object.

For the record, you’re now at 1000(C) x (1000(B.C[ ]) + 1000(B)) potential entries for your tree view.

Moving on, each of those has 1000 references from Arrays that each have one reference from one of 1000 instances of Type A.

So we are at 1000(C) x (1000(B.C[ ]) + 1000(B)) x (1000(A.B[ ]) + 1000(A)) potential entries for your tree view.

Lastly, each of those instances of A has 1 reference from the array on the singleton, and that array has one reference from type X.

So you run up a total of potential entries for your tree of
1000(C) x (1000(B.C[ ]) + 1000(B)) x (1000(A.B[ ]) + 1000(A)) x (1000(X.A[ ]) + 1(X)) = 4,004,000,000,000

(Note: that’s way more than int.MaxValue and you can’t even create a standard C# array that could hold all of these)

While really you only have 1(D) + 1000(C) + 1000(C[ ]) + 1000(B) + 1000(B[ ]) + 1000(A) + 1(A[ ]) + 1(X) = 5003 objects in memory.
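The arithmetic in this example is easy to check; a quick sketch using the numbers above:

```python
# Potential tree-view entries when expanding bottom-up from D, versus the
# actual number of objects in memory (numbers taken from the example above).
INT_MAX = 2**31 - 1

potential_entries = 1000 * (1000 + 1000) * (1000 + 1000) * (1000 + 1)
actual_objects = 1 + 1000 + 1000 + 1000 + 1000 + 1000 + 1 + 1  # D, C, C[], B, B[], A, A[], X

assert potential_entries == 4_004_000_000_000  # far more than int.MaxValue
assert potential_entries > INT_MAX
assert actual_objects == 5003
```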

Yes, you can be a bit smarter about this and:

  • Check a Dictionary for recurring items and instead of giving them a full entry, add an entry in the UI that you can click to go to where else it was in the tree.
  • Once you detect a recurrence, and if that is the only chain of references in a branch, shave off/merge down one branch of that tree, ideally the longer one.

Doing 1. still adds some overhead for recurrences, but less, while maintaining a clear UI.

Doing 2. seems trivial in this artificial case, but in reality you have to walk each branch backwards (and each node of it forwards again) to check that there are no further references to that branch that might make it more relevant to keep.
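Strategy 1 could be sketched like this (hypothetical Python with invented names; the real UI code is of course more involved):

```python
# While expanding the reference tree bottom-up, remember where each node was
# first placed; a repeat gets a lightweight "see elsewhere" entry instead of
# a full (possibly huge) subtree.
def build_reference_tree(leaf, get_referencers):
    first_seen = {}  # node -> path of the tree entry where it was fully expanded

    def expand(node, path):
        if node in first_seen:
            # Recurring item: emit a redirect entry instead of re-expanding.
            return {"node": node, "redirect_to": first_seen[node]}
        first_seen[node] = path
        children = [expand(r, path + (r,)) for r in get_referencers(node)]
        return {"node": node, "children": children}

    return expand(leaf, (leaf,))
```

On a heavily converging reference web this keeps the tree linear in the number of objects rather than the number of paths, at the cost of the extra dictionary lookups and memory described above.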

For similar reasons it also doesn't really help to go depth first, because you could end up at different roots, with different lengths of reference chains and different "weights" in how binding those roots are (e.g. static roots will normally outlast your scene roots; scene objects held by both might lose their native memory on scene unload but leak the managed memory because of the static reference; but if their managed fields reference Assets, those stay alive and also keep their native memory; and then there is DontDestroyOnLoad…).

And yes, once you've hit the root, pruning the irrelevant branches becomes trivial again. But this example, while artificial and therefore relatively harmless (compared to actual, real-world reference trees from internal and user-provided snapshots that I inspected), hopefully illustrates that sometimes (quite often, actually) the nesting is so convoluted that you will just literally run out of RAM before you even get to the root (and remember, the snapshot content, including its entire set of managed heap bytes, is also in memory; and the UI already does part 1 WITHOUT adding entries to redirect you to previously found referrers), so you just can't know which branch is shorter or more relevant than the other.

Also note that for the Memory Profiler we already had to add a new, package-internal, long-indexed native collection, because some snapshots had data entries (particularly for the managed heap and the number of managed objects on it) with more than int.MaxValue elements, and we just couldn't use existing containers as they are all int-indexed.

HOWEVER, if you start from the roots to find all paths and attribute the memory held by each node, my math has that at a comparably trivial O(2n) operation (much like if you could do the same for the entire ancestry tree of humankind, but times 2 because you need to attribute backwards, and there are cycles in there in the case of memory analysis).
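One reading of that roots-down idea, as a hypothetical sketch (invented names, not Unity's implementation): a single breadth-first pass from the roots over forward references visits each object and each edge once, so the cost is linear in the graph size rather than combinatorial like the bottom-up walk.

```python
from collections import deque

# Single pass from the roots: each object is expanded exactly once, and the
# `parent` map records the reference through which it was first reached, so a
# path back to a root can be read off by following `parent` entries.
def reach_from_roots(roots, get_references):
    parent = {root: None for root in roots}
    queue = deque(roots)
    while queue:
        node = queue.popleft()
        for ref in get_references(node):
            if ref not in parent:  # skip re-visits, which also handles cycles
                parent[ref] = node
                queue.append(ref)
    return parent
```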

But before we can start testing that theory, some other things need to be done first that are either higher priority (including critical bug fixes and already-under-way feature work, not just for the Memory Profiler) or necessary preparatory work for this to even be reasonable to pursue.

So, annoyingly, I still don't have an ETA for this, but maybe that context helps explain the state of things?

Hi Martin,
Thanks for the detailed response! I can totally understand it's a really tough one to solve, and I appreciate the information. It sounds like Heap Explorer's solution might not have been perfect, but it always seemed to do the job whenever we used it. I think this is a case where something that works 90% of the time might be the best solution (but again, I'm probably over-simplifying!)

BTW, if you're taking requests for the memory profiler (which I know is a long shot), it'd be lovely to have a public API to interrogate what's in memory. We have a system where we unit-test a bunch of Menu->Level1->Menu->Level2 etc. transitions, then check that when we're back in the menu there are no objects hanging about in memory that don't have the right to be there. I've done it for now by changing a lot of "internal" to "public" in the package, but it'd be ace if there was a nice API for it!

Thanks again,

[quote=“Rob-Fireproof”, post:10, topic: 742399]
we’ve just upgraded to 2022.3, so HeapExplorer isn’t an option for us any more unfortunately
Can you elaborate on what the problem is with Heap Explorer in Unity 2022.3?

[quote=“Rob-Fireproof”, post:12, topic: 742399]
It sounds like Heap Explorer’s solution might not have been perfect, but it always seemed to do the job whenever we used it - I think this is a case that something that works 90% of the time might be the best solution (but again, I’m probably over-simplifying!)
Yes, perfectionism can be a trap, but we've learned the hard way that a tool that confidently presents inconsistent results has users questioning all sorts of things that were never incorrect, rather than first assuming that there might be truth in there. So accuracy and trustworthiness in the data the tool presents was a big focus, particularly for the big step of the first verified 1.0 version.

I haven't looked at it that closely, but I think Heap Explorer also has a processing step/time cap on it. From what I've seen while exploring the issues in the Memory Profiler package, let's just say that if Heap Explorer's Paths To Roots works 90% of the time for profiling Editor-specific memory, then it's probably skipping something, or it is magic (or I'm being incredibly dumb about it while only thinking I finally figured it all out, which is totally possible).

This is no dig at Heap Explorer. I'm very happy it's there and helpful to people, and Peter has also helped us immensely with bug reports and pushing the quality of the features, APIs and data.
I also did a cursory check that it could deal with the API deprecation I did for 2022.2. I didn't check much of the functionality or later versions, but I'm not aware of why it shouldn't work anymore.

And it might be a fair point that Editor profiling in itself is a bit too narrow a use case, but it is also one we need to solve for References in general before we can confidently claim something to be the definitive Path(s) To Roots. (Also, the Memory Profiler UI itself has had memory leaks 4 times now. Doing memory management right is tough even if you're supposed to be the experts on it. We need solid and reliable tooling for this.)

Another fun anecdote was when we first started trying to reconstruct the scene hierarchy. What seemed to work reasonably well at first sometimes fell apart, and it was hard to put a finger on it, until we realized that Prefab Assets and their references weren't taken into consideration by our algorithms and were tripping them up.

[quote=“Rob-Fireproof”, post:12, topic: 742399]
BTW if you’re taking requests for the memory profiler (which I know is a long shot),
We don't have any concrete plans for opening up more API, but we do take feedback for stuff like that and in general, preferably through the public roadmap page (Profiling tab) for easier prioritization on our end :slight_smile:

[quote=“Rob-Fireproof”, post:12, topic: 742399]
I’ve done it at the moment by changing a lot of “internal” to “public” in the package, but it’d be ace if there was a nice API for it!
Psst: rename your asmdef to Unity.MemoryProfiler.Editor.Tests and all those internals become available to you as if they were public, without needing to modify the package code. But yeah, internal API can change without warning from version to version.

[quote=“Peter77”, post:13, topic: 742399]
Can you elaborate on what problem is there with Heap Explorer in Unity 2022.3?
Err, me being an idiot by the looks of it! When we upgraded to 2022, a ton of stuff didn't compile, and Heap Explorer seemed to be part of that. I removed it, assuming there was some breaking change in Unity, and since HE wasn't actively developed any more, I needed to find another option.

That was all completely wrong though. I just pulled it back into the project and it works fine! Time to revert a bunch of changes…


[quote=“Rob-Fireproof”, post:15, topic: 742399]
Err, me being an idiot by the looks of it!
Not necessarily.

Other issues during the upgrade might have affected the script updater in such a way that it looked like Heap Explorer wasn't compatible, instead of being auto-upgraded.

@Peter77 some adjustments in HE for the API deprecation (possibly with #ifdefs) could maybe prevent such confusion.


It was actually down to a load of obsolete API warnings that were being treated as errors. I introduced a csc.rsp file next to the HE asmdef and it's all good now (until the API is actually deleted, but I guess that'll be a change in a major Unity version beyond 2022, so we should be fine for this project).
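For anyone hitting the same thing: a minimal csc.rsp along those lines might contain just the line below. CS0618 is the usual "member is obsolete" warning ID; the exact IDs in your log may differ, so treat this as an assumption rather than the actual file used here.

```
-nowarn:0618
```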

BTW, I'm looking to remove the deprecated APIs in 2023.2; it's a bit delayed and might land later. It's possible that HE only uses updatable API that would still have ScriptUpdater info present after the removal, but I haven't checked that yet. Just a heads-up.


[quote=“Rob-Fireproof”, post:17, topic: 742399]
It was actually down to a load of obsolete api warnings that were being treated as errors.
Huh, I wonder if that combined with Safe Mode could break ScriptUpdater?

Or are there still warnings for it? I had added it as a package via git, so that might have hidden the warnings.

It was a bit odd: the warnings didn't show up for me locally, but they blew up on our CI tests. (I had to pull the package into the project rather than using git, as most people on the team don't have git installed.)