Does the error below indicate a memory leak?
On rare occasions my stand-alone build crashes in one particular place: it hangs while one particular level is loading after another.
My game is a series of about 25 small levels, if that tells you anything…
How can I debug this? This is all I have from output_log.txt:
DynamicHeapAllocator allocation probe 1 failed - Could not get memory for large allocation 268435456.
DynamicHeapAllocator allocation probe 2 failed - Could not get memory for large allocation 268435456.
DynamicHeapAllocator allocation probe 3 failed - Could not get memory for large allocation 268435456.
DynamicHeapAllocator allocation probe 4 failed - Could not get memory for large allocation 268435456.
DynamicHeapAllocator out of memory - Could not get memory for large allocation 268435456!
Could not allocate memory: System out of memory!
Trying to allocate: 268435456B with 16 alignment. MemoryLabel: GfxDevice
Allocation happend at: Line:62 in
Memory overview
Open Task Manager and run the game again; keep an eye on its memory usage and see what happens.
Note that just because you still have system RAM available doesn’t mean your game can access it. If you’ve built in 32-bit mode, your app is limited to a 32-bit address space (at most 4 GB, and often closer to 2 GB in practice on Windows). This is likely to be relevant if the crash happens in your build but not in your Editor.
If that is the case, don’t just switch your game to 64-bit and build again. Look at what resources it’s using and try to cut down memory usage.
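If you want to confirm which flavour the player is actually running, something like this in your first scene will write it to output_log.txt (the class name is just a placeholder; this is only a sketch):

using UnityEngine;

// Sketch: log whether the player runs as a 64-bit process and how much
// physical RAM the OS reports, so an x86 vs x86_64 mix-up is easy to spot.
public class AddressSpaceCheck : MonoBehaviour
{
    void Start()
    {
        bool is64Bit = System.IntPtr.Size == 8; // 8-byte pointers => 64-bit process
        Debug.Log("64-bit process: " + is64Bit +
                  ", system RAM reported: " + SystemInfo.systemMemorySize + " MB");
    }
}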
You two are exactly right. I didn’t realize I had switched to ‘x86’ yesterday, which turned a scene I haven’t changed in a while into a new problem. There is no crash on x86_64.
I should indeed clean up the Instantiate() calls I’m doing in Start(); I feel this situation has exposed some sloppy code there.
The crash only happens if I play through all the previous scenes before loading the “problem scene”. I guess that’s because enough memory is already in use that loading the “problem scene” pushes it over the edge?
I’m expecting my players to have a decent, not-too-old computer for my 3D game. Can I therefore always build for x86_64?
268435456 bytes happens to be exactly 256 MB. So it sounds like you’re trying to load something that’s 256 MB into memory, or several somethings that are 256 MB each. Look around for anything in your project that’s about 256 MB and that might be a clue: for example, an int array in your code with 64 * 1024 * 1024 elements (4 bytes each), or an uncompressed 8192 * 8192 RGBA32 texture. 256 MB isn’t a ton of memory, but it adds up quickly if you keep allocating it without freeing anything; do it 8 times and you’ll hit 2 GB, which is where an x86 app will start to get unstable.
You should look at the Unity Profiler (in the Unity menu bar, Window → Profiler), run your game, and watch the Memory section to see where usage increases when loading scenes.
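Purely as an illustration (none of this is from your project), here are two innocent-looking ways to allocate exactly that much:

using UnityEngine;

// Illustration only: two ways to hit exactly 268,435,456 bytes (256 MB)
// without it looking like much in code.
public class BigAllocations : MonoBehaviour
{
    void Start()
    {
        // 64M ints * 4 bytes each = 268,435,456 bytes on the managed heap.
        int[] huge = new int[64 * 1024 * 1024];

        // An uncompressed 8192 x 8192 RGBA32 texture: 8192 * 8192 * 4 bytes = 256 MB
        // (assuming the GPU's max texture size allows it).
        Texture2D tex = new Texture2D(8192, 8192, TextureFormat.RGBA32, false);

        Debug.Log(huge.Length + " ints and a " + tex.width + "x" + tex.height + " texture allocated");
    }
}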
Make sure you unload the previous scene before loading the new one. If you’re just using LoadScene it should do that automatically, but if you’re loading additively (LoadSceneMode.Additive), or you’ve got singletons or objects marked as DontDestroyOnLoad, then stuff from previous scenes will still be sitting around in memory.
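For reference, here’s roughly what the two loading styles look like with the SceneManager API (Unity 5.3+); the scene names are placeholders:

using UnityEngine;
using UnityEngine.SceneManagement;

public class LevelLoader : MonoBehaviour
{
    public void LoadNextLevel()
    {
        // Single mode tears down everything from the previous scene except
        // objects marked DontDestroyOnLoad.
        SceneManager.LoadScene("ProblemScene", LoadSceneMode.Single);
    }

    public void LoadOverlay()
    {
        // Additive mode keeps the current scene in memory, so it has to be
        // unloaded explicitly when you're done with it.
        SceneManager.LoadScene("Overlay", LoadSceneMode.Additive);
        // Later: SceneManager.UnloadSceneAsync("Overlay");
    }
}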
Hard to say… your players would need a 64-bit version of Windows to access more than 4 GB of memory, and a lot of people don’t realize that, so they might have expensive computers with 16 GB of RAM while running an old 32-bit version of Windows (XP, say), never noticing that most of that memory is just sitting there unused. It’s a good idea to figure out where your memory is going regardless; memory leaks are bad, and all an expensive computer does is push back how long someone can play before the game crashes.
Memory leaks never happen in C#/JavaScript (unless you are using C++), as C#/JavaScript already have a garbage collector that manages memory for you. It’s probably some other kind of memory problem with your computer (e.g., simply running out of memory).
First, adding more memory will only delay the issue, and anyone with only 4 gigs of RAM won’t be much better off than they would be with a 32-bit build anyway.
Second, while it’s not technically a “leak”, it does sound like you’re holding onto stuff that you shouldn’t. (A “leak” is when you drop the pointer and therefore can’t free the memory. C# handles that, but it can’t stop you from just holding onto references to all of your old stuff.)
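To make that concrete, here’s a made-up example of the kind of “holding on” I mean. Nothing here is a leak in the C++ sense, but as long as this static cache holds a reference, neither the garbage collector nor Resources.UnloadUnusedAssets can free those textures:

using System.Collections.Generic;
using UnityEngine;

public static class TextureCache
{
    // Statics survive scene loads, and so does everything they reference.
    static readonly Dictionary<string, Texture2D> cache = new Dictionary<string, Texture2D>();

    public static Texture2D Get(string path)
    {
        Texture2D tex;
        if (!cache.TryGetValue(path, out tex))
        {
            tex = Resources.Load<Texture2D>(path);
            cache[path] = tex; // stays in memory for the rest of the run
        }
        return tex;
    }

    // Something like this needs to run between scenes, or the cache only ever grows.
    public static void Clear()
    {
        cache.Clear();
        Resources.UnloadUnusedAssets();
    }
}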
For context, take a look at PS3/Xbox 360 games: they had 512 MB of total system and video RAM. Just don’t expect to match their efficiency - you won’t!
I’m learning a lot from all of you. And I found a set of textures from an asset store pack that were much too big. I slimmed those down a lot, as well as other scattered oversights. That was needed either way.
So I’m disappointed that the problem is still there. Debugging this is slow because I have to play for about 5 minutes before I know whether it crashes.
I’m not finding the Profiler helpful for this task, so I wonder what I’m misunderstanding about it. In the past I’ve found it enormously useful for catching inefficient code or excessive instantiation, using the “CPU Usage” view.
Given that I have to play for 5 minutes to get the crash, how would you use the Profiler to look for issues? The Memory profiler does show some scenes getting up to 400 MB on the second metric, “Unity”, and then the value drops in my more modest scenes.
I’m careful with my 5 public static vars, and using DontDestroyOnLoad.
I load scenes simply, with LoadScene().
Well, what are your static variables? (Public is irrelevant.)
I’ve not debugged this type of issue before, but I’d look at the memory profiler during a scene load and see what has and hasn’t changed. Maybe a screenshot?
I would also play the game in the Editor and look at the Hierarchy for any objects that either get duplicated or don’t go away on scene load. On new versions of Unity anything marked DontDestroyOnLoad shows under its own sub-scene in there, which is pretty handy. Does that exist, and do things get added to it on scene load?
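One more idea for the 5-minute problem: a little logger like this (names are mine, not a standard component), marked DontDestroyOnLoad, will write Unity’s memory totals to the log every time a scene finishes loading, in the Editor or a Development Build, so you can just play through once and read the trail afterwards:

using UnityEngine;
using UnityEngine.Profiling;
using UnityEngine.SceneManagement;

public class SceneMemoryLogger : MonoBehaviour
{
    void Awake()     { DontDestroyOnLoad(gameObject); }
    void OnEnable()  { SceneManager.sceneLoaded += OnSceneLoaded; }
    void OnDisable() { SceneManager.sceneLoaded -= OnSceneLoaded; }

    void OnSceneLoaded(Scene scene, LoadSceneMode mode)
    {
        long allocatedMB = Profiler.GetTotalAllocatedMemoryLong() / (1024 * 1024);
        long reservedMB  = Profiler.GetTotalReservedMemoryLong()  / (1024 * 1024);
        Debug.Log(scene.name + " loaded: " + allocatedMB + " MB allocated, "
                  + reservedMB + " MB reserved");
    }
}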
You all were right. The crash in 32-bit showed that I had problems that needed to be addressed even though it didn’t crash in 64-bit. There were 4 problems:
My crash scene has asset store models with textures that were silly big. I should have looked at them more carefully.
This crash scene was late in the game, so issues with prior scenes were building to an inevitable choke point. And about those prior scenes…
I’m using a wonderful lightning particle asset store pack that, it seems, can’t be used much at its highest settings. When it is, these particles make memory climb and never come back down. Turning those settings down seems to do the trick.
Without the Destroy() inserted in the code below, this ends up being a “material leak”, for lack of a better term.
Material[] mats = objectMeshToHighlight.GetComponent<SkinnedMeshRenderer>().materials;
Destroy(mats[0]); // without this, the old material instance sticks around - the 'material leak'
mats[0] = highlightMaterialFast;
// more code here that selects what kind of 'glow' material is around the character
objectMeshToHighlight.GetComponent<SkinnedMeshRenderer>().materials = mats;
Am I right about this, though? Watching the Memory profiler in the Editor, when I move from an intense scene to a light scene, memory use drops impressively.
But when I watch Task Manager with a stand-alone build, that drop-off is much less significant. My early scenes push memory use up a lot, and when I get to later scenes (including my big, previously problematic one) it only goes up a little.
This seems functional; if the game gets a bit greedy, it holds on to memory for later.
This is a common Unity gotcha… when you call renderer.material or renderer.materials, Unity automatically creates a new copy of the material (or of all the materials) just for that renderer, because it assumes you want to change something on the material for that one object without affecting the others. If you want to change something on the material for every object using it, OR if you just want to swap one of the materials for a new material on one object, which looks like what you’re doing, use renderer.sharedMaterial or renderer.sharedMaterials instead. Then you can get rid of that Destroy(mats[0]) line and stop bothering the garbage collector. See the note here:
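With sharedMaterials, your snippet would become something like this (keeping your variable names):

SkinnedMeshRenderer rend = objectMeshToHighlight.GetComponent<SkinnedMeshRenderer>();
Material[] mats = rend.sharedMaterials; // references to the existing materials, no copies created
mats[0] = highlightMaterialFast;        // swap the slot; no Destroy() needed
// more code here that selects what kind of 'glow' material is around the character
rend.sharedMaterials = mats;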
That’s probably fine. That’s the way the garbage collector works: if you’re using a bunch of memory and then stop using it, it doesn’t get freed immediately. The garbage collector waits until you actually need some of that memory and then frees it; otherwise it just sits there. That’s usually a good thing, since freeing memory by collecting garbage can make your framerate hitch for a bit, so you don’t want to do it more often than necessary.
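If you ever want to free things up eagerly rather than waiting for the collector, you can do it during a loading screen, where the hitch doesn’t matter. A rough sketch (names are placeholders), run as a coroutine between scenes:

using System.Collections;
using UnityEngine;

public class LoadingCleanup : MonoBehaviour
{
    public IEnumerator CleanupBetweenScenes()
    {
        // Release Unity assets (textures, meshes, ...) that nothing references any more.
        yield return Resources.UnloadUnusedAssets();
        // Then collect the managed heap.
        System.GC.Collect();
    }
}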
When the OS allocates new memory to an application, it gets added to the end of the application’s address space. (This is an oversimplification, but it’ll do as an explanation.) But when you free memory, it could come from anywhere in the app’s address space. So while that memory may now be available for the application to re-use, the OS can’t hand it back to other processes because it’s still in the middle of a larger chunk that’s in use.
There’s also other stuff that can impact this, like the OS’s general strategy for memory management.