What are the plans for the memory profiler?

  1. Do you plan to add an option to export the profiled data to CSV for further custom analysis? I regularly keep track of the memory footprint of my projects in an Excel sheet to control the planned memory budget, and retyping the data is error-prone and takes time.
  2. Do you plan to improve the memory tracking so we can better understand what is under “unknown”, “untracked” or “” labels?
  3. What are the general plans for the memory profiler? What new features or improvements can we expect?

Note: I accidentally sent the reply before it was ready, here it is now :slight_smile:

Yes, though there is no concrete timeline behind this yet. We’re thinking about what the data should look like when we export it. Ideally we’d also like to expose it via a C# API. Do you have any ideas? What would be important to you?

  • Unknown is a bookkeeping issue in our native code that needs fixing, i.e. those are allocations that are not associated with a native root. I’ve recently made investigating them easier on our side, but haven’t yet gotten around to digging deeper into them to get some of those resolved.
  • Untracked is something we occasionally find ways to improve on some platforms, but there are no concrete plans to improve it wholesale. Some things are going to remain in there for the foreseeable future, like native plugin memory where those plugins don’t use our low-level API to allocate their memory in a tracked way through Unity’s Memory Manager. Graphics memory held by the graphics driver will also likely remain in there, as we have no way of looking inside it. We only calculate the size of the resources we know it must be handling, but if it e.g. retains that memory after those resources are unloaded, it will remain somewhere in Untracked. Unity 6 at least improved this by providing insight into how much of that is resident, which is usually not a lot. There is also one bug I’m handling that would reduce some untracked memory that really should be tracked by us, because it relates to managed heap pages that haven’t had any allocations in them for a while and are only reserved address space (and likely non-resident).
  • Un-named entries shouldn’t happen, and if they do happen on the engine side, please report a bug. More often though, those are created by new UnityObject instances constructed in user code without a name being assigned to their .name property (there’s a small illustration of the user-code side of this below). There is not much the Memory Profiler can do about that.
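
A minimal sketch of that user-code side, just for illustration (the class and object names here are made up):

```csharp
using UnityEngine;

// Hypothetical example component; the point is only the .name assignments.
public class RuntimeAssetSetup : MonoBehaviour
{
    void Start()
    {
        // Without a name, this texture would show up as an un-named entry in
        // memory tooling; assigning .name makes it identifiable in captures.
        var scratch = new Texture2D(256, 256, TextureFormat.RGBA32, false);
        scratch.name = "RuntimeAssetSetup.ScratchTexture";

        // The same applies to any other UnityEngine.Object created from
        // script, e.g. Meshes, Materials or RenderTextures.
        var buffer = new RenderTexture(512, 512, 0);
        buffer.name = "RuntimeAssetSetup.Buffer";
    }
}
```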

Some things currently planned somewhat concretely without promising any timeline for them are:

  1. Selected item details showing a breakdown of graphics, native and managed memory in line with what the tables show.
  2. Tracing references from managed code to Native Collections / UnsafeUtility.Malloc allocations, making it easier to analyze Persistent allocations made this way, as long as something outside of the call stack still references them.
  3. Improved References view that offers a full and clear list of all Path To Roots (right now it stops looking after processing 2k references and doesn’t clearly label roots as such)
  4. Showing not just the size of an object but also the amount of memory it is responsible for holding onto via the objects and native allocations it references.
  5. Faster snapshot opening times past the first time it’s opened / preparing for faster opening by processing snapshots in the background.

That’s a few things we’ve gathered to be important to people, but it’s a non-exhaustive list. Are there any things you’d like us to focus on beyond the mentioned CSV export?


Couldn’t that be labeled with something like “unnamed GameObject with instanceID foo in scene bar” so it’s clearer to the user what’s going on?

Thanks for the extensive response! It’s really helpful and reassuring!

I love the idea of an exposed C# API. I would love to use it to create custom tools for controlling the memory budget. For example, QA could play the game with a custom profiler module attached that would notify them when certain memory budgets were exceeded (e.g., a warning when texture memory exceeds 600 MB, an error after 800 MB). QA could then take a snapshot and perform an automated, project-tailored analysis. It would be fantastic to have.
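
Just to sketch what I mean (this is not the Memory Profiler API, only an approximation using the existing ProfilerRecorder counters, with placeholder budget numbers):

```csharp
using Unity.Profiling;
using UnityEngine;

public class TextureBudgetWatcher : MonoBehaviour
{
    // Hypothetical budget values, in bytes.
    const long WarningBudget = 600L * 1024 * 1024;
    const long ErrorBudget = 800L * 1024 * 1024;

    ProfilerRecorder _textureMemory;

    void OnEnable() => _textureMemory = ProfilerRecorder.StartNew(ProfilerCategory.Memory, "Texture Memory");

    void OnDisable() => _textureMemory.Dispose();

    void Update()
    {
        long used = _textureMemory.LastValue;
        if (used > ErrorBudget)
            Debug.LogError($"Texture memory at {used / (1024 * 1024)} MB, over the 800 MB budget");
        else if (used > WarningBudget)
            Debug.LogWarning($"Texture memory at {used / (1024 * 1024)} MB, over the 600 MB budget");
    }
}
```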

For the CSV, this format would be enough for me to make my work easier. Example data from the All of Memory tab:

It could be converted into this. More or less, what we see in the profiler is what we get in the CSV, but with the option to convert all the data or limit it to a certain depth:

“Depth” can be interpreted as indentation, so based on this CSV data it should be possible to reconstruct the profiler window completely. This would allow me to easily track the memory budget over time in my projects. Right now I’m scraping the data using OCR, or rewriting it by hand in some cases, which is a painful process.
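
As a rough sketch of the depth-column idea (the row type and column names are just my invention, not an actual export format):

```csharp
using System.Collections.Generic;
using System.Text;

// Hypothetical in-memory representation of one row of the All Of Memory tree.
class MemoryRow
{
    public string Name;
    public long SizeBytes;
    public List<MemoryRow> Children = new List<MemoryRow>();
}

static class DepthCsvExporter
{
    // Flattens the tree into "Name,Depth,SizeBytes" rows; the Depth column
    // stands in for indentation, so the tree can be rebuilt from the CSV,
    // optionally limited to a certain depth.
    public static string ToCsv(MemoryRow root, int maxDepth = int.MaxValue)
    {
        var sb = new StringBuilder("Name,Depth,SizeBytes\n");
        Append(root, 0, maxDepth, sb);
        return sb.ToString();
    }

    static void Append(MemoryRow row, int depth, int maxDepth, StringBuilder sb)
    {
        if (depth > maxDepth) return;
        sb.Append('"').Append(row.Name.Replace("\"", "\"\"")).Append('"')
          .Append(',').Append(depth).Append(',').Append(row.SizeBytes).Append('\n');
        foreach (var child in row.Children)
            Append(child, depth + 1, maxDepth, sb);
    }
}
```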

Providing detailed information about each asset in the CSV would also be useful. I once wrote a simple profiler based on the Resources and Profiler APIs that output asset details in a single cell, so an additional column was dedicated to that. A texture, for example, was described as “1024x1024 ASTC4x4 mips read/write”. That was already enough for me to quickly diagnose issues, even if each asset had a different description format.
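
Roughly along these lines, from memory (the exact description format differed per asset type):

```csharp
using UnityEngine;
using UnityEngine.Profiling;

static class TextureReport
{
    // Builds a one-cell description per texture,
    // e.g. "1024x1024 ASTC_4x4 mips read/write, 1.3 MB".
    public static void LogAllTextures()
    {
        foreach (var tex in Resources.FindObjectsOfTypeAll<Texture2D>())
        {
            long size = Profiler.GetRuntimeMemorySizeLong(tex);
            string details = $"{tex.width}x{tex.height} {tex.format}" +
                             (tex.mipmapCount > 1 ? " mips" : "") +
                             (tex.isReadable ? " read/write" : "") +
                             $", {size / (1024f * 1024f):F1} MB";
            Debug.Log($"{tex.name}: {details}");
        }
    }
}
```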

I’m very happy with the planned changes. I am looking forward to that! Thanks!


For Scene Objects, that latter part would definitely be an option. Though if they were only just created, they simply end up in the currently active scene. In terms of the roadmap I mentioned, that would also really come together between items 3 and 4.

However, this tends to happen more with Assets, and those don’t live in a scene. The instance ID is kind of fleeting and only useful within the current run of the Player/Editor. When selecting such objects, that info is already in the Selected Item details on the right. We also show instance IDs when comparing snapshots from the same session, because then that is actually meaningful information for matching up items between captures. It can lead to some more confusion, though.

For Managed Objects we eventually opted to list them by their address value, which isn’t particularly nice looking, but unless they are managed shell objects for a still-existing native object, they don’t have a name.

So maybe we do need to list the unnamed ones in a similar fashion to what you suggested…

Oh, and it’s also worth noting that for a while there were some graphics resources that were not related to Native Objects. That was a bug we have since fixed; it affected some buffers that were allocated for the graphics driver to retain once the memory of the object they were originally allocated for was no longer used. They now have slightly more useful names.


Thanks for that example and your additional thoughts on this. I had been mulling over different ways to deal with the nested nature of the tables and the information, since CSV doesn’t really have a system for grouping items together. A path column (with escaped path delimiters if they are used in an Object Name or mapped file) plus a depth count column is also the best I’ve come up with so far.

(I know my bank already doesn’t really care what the CSV standard says, but I do think there is a benefit to at least roughly sticking to a format that a standard spreadsheet tool would be happy to load without requiring custom tooling to preprocess it…)
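
For the escaped-delimiter part mentioned above, something along these lines is what I have in mind (a sketch only, the helper name is made up):

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical helper, not an actual exporter type.
static class MemoryPathColumn
{
    // Joins the table hierarchy into a single "Path" cell, escaping the '/'
    // delimiter (and the escape character itself) wherever it appears inside
    // an Object name or mapped file name, so the path can later be split
    // back unambiguously to reconstruct the tree.
    public static string Join(IEnumerable<string> segments) =>
        string.Join("/", segments.Select(s => s.Replace("\\", "\\\\").Replace("/", "\\/")));
}
```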

Asset details are indeed another issue. And references :confused: … Might be a case of perfect being the enemy of good enough here, though.

Tbh, I would never expect to have data about references in the CSV. I imagine the CSV would be nice just for general memory budget tracking; it would be silly to track memory leaks with it (I use reference tracking just to find out why something isn’t unloaded).
