Hi all,
I’m really struggling to get the game I’m working on to fit inside the iPad 1’s memory constraints.
The problem is that I don’t have that many resources loaded at any time, and I don’t really understand what could be wrong.
Here are the numbers, computed for the worst-case scenario where all resources are loaded at the same time (which can never happen) and rounded up to the nearest MB:
- Total texture usage (taking mip-maps into account, even if most of my textures are not mip-mapped): 5 MB
- Total sound usage: 4 MB (but sound files are streamed, so that should not be a problem)
- Application’s binary size: 13 MB
- Total 3D models, 100,000 vertices / 100,000 polys (it never happens that I have that many polys loaded in a scene): assuming 7 floats per vertex, 3 ints and 3 floats per poly, plus a bunch of overhead, that should be roughly 6 MB
- Video buffers: 1024 * 768 * 2 bytes * 2 for 16-bit double buffering: 3 MB
Adding everything up gives me 31 MB, very far from the 60 MB I’ve heard an application should stay under to fit nicely on that hardware.
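For reference, here is a rough sketch of that arithmetic (assuming 4-byte floats/ints and a ~1 MB allowance for mesh overhead; the figures are the estimates above, not measurements):

```csharp
// Back-of-the-envelope memory budget from the estimates above.
// Assumptions: 4-byte floats/ints, 1024x768 screen, 16-bit double-buffered framebuffer.
using System;

static class MemoryBudget
{
    const float MB = 1024f * 1024f;

    static void Main()
    {
        float textures = 5f;   // MB, texture inventory incl. mip-maps
        float sounds   = 4f;   // MB, streamed from disk
        float binary   = 13f;  // MB, application binary
        float meshes   = (100000 * 7 * 4 + 100000 * (3 + 3) * 4) / MB + 1f; // ~5 MB + ~1 MB overhead
        float video    = (1024 * 768 * 2 * 2) / MB;                         // 3 MB, 16-bit double buffer

        Console.WriteLine("Meshes: {0:F1} MB, Video: {1:F1} MB", meshes, video);
        Console.WriteLine("Total:  {0:F1} MB", textures + sounds + binary + meshes + video);
        // Prints roughly 31 MB in total.
    }
}
```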
I tried using the profiler with the application running on real hardware, but it gives me odd numbers (like no textures loaded, or lots of textures taking no memory, total memory usage above 300 MB when the device only has 256 MB, etc.).
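As a cross-check against the profiler, something like this quick on-device logging sketch (my own addition; it only covers the Mono heap, not textures, meshes or audio) can at least confirm the managed side of things:

```csharp
// Minimal on-device sanity check: periodically log the managed (Mono) heap size.
// This does NOT include textures, meshes or audio buffers, only managed allocations.
using UnityEngine;

public class MemoryLogger : MonoBehaviour
{
    void Update()
    {
        if (Time.frameCount % 300 == 0)  // roughly every few seconds at 60 fps
        {
            long monoHeap = System.GC.GetTotalMemory(false);
            Debug.Log("Mono heap: " + (monoHeap / (1024f * 1024f)).ToString("F1") + " MB");
        }
    }
}
```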
So here are the questions:
- What is the expected runtime memory usage of Mono’s VM (obviously not taking the game’s logic into account)?
- What obvious point am I missing?
- Do you have any tricks to share?
The iPad 1, like any 3GS+ device, has no ‘fixed amount of RAM’ available to applications.
It has 256 MB of RAM in total, of which the OS on average eats 130-160 MB.
This leaves you with 80-120 MB of RAM + VRAM on average, but that’s not guaranteed, and reacting to the memory warning is advisable.
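For example, a rough sketch of reacting to that warning (note: Application.lowMemory only exists in newer Unity versions; on older ones you would have to forward the iOS memory-warning notification from a native plugin yourself):

```csharp
// Rough sketch only: free whatever can be freed when iOS signals memory pressure.
// Application.lowMemory is a newer-Unity API; older versions need a native bridge
// for UIApplicationDidReceiveMemoryWarningNotification instead.
using UnityEngine;

public class MemoryWarningHandler : MonoBehaviour
{
    void OnEnable()  { Application.lowMemory += OnLowMemory; }
    void OnDisable() { Application.lowMemory -= OnLowMemory; }

    void OnLowMemory()
    {
        Resources.UnloadUnusedAssets(); // drop assets nothing references any more
        System.GC.Collect();            // then let Mono reclaim managed memory
    }
}
```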
As for your calcs:
You’re missing information on whether the models are animated; if they are, that will add a lot more than just that.
Also make sure to disable static batching, because static batching will massively blow up the size.
With the numbers you mentioned, and assuming that you didn’t enable static batching, I can only think of the following root causes without further information:
- Loading textures from outside (regardless of whether via WWW, System.IO or asset bundles), as they are 3-5 times as large as resource-loaded ones.
- Usage of asset bundles (regardless of how they are loaded, as it seems that LoadFromCacheOrDownload does not, as advertised, decompress them before loading, so you still get the memory hit and the load-time impact); see the sketch after this list.
- Some form of ‘decompress on load’ audio (in which case every minute of MP3 can take up to 20 MB of RAM depending on the audio settings).
- More than one compressed audio stream, in which case they will fall back to CPU decompression and as such use RAM to hold the data.
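If you do end up with asset bundles, the memory hit can at least be kept short-lived, along these lines (a sketch only; the URL and asset name are made up, and newer Unity versions replace these calls with UnityWebRequestAssetBundle / LoadAsset):

```csharp
// Sketch: pull what you need out of the bundle, then Unload it immediately so the
// bundle data does not keep sitting in memory.
using System.Collections;
using UnityEngine;

public class BundleLoader : MonoBehaviour
{
    IEnumerator LoadOneTexture()
    {
        WWW www = WWW.LoadFromCacheOrDownload("http://example.com/textures.unity3d", 1);
        yield return www;

        AssetBundle bundle = www.assetBundle;
        Texture2D tex = (Texture2D)bundle.Load("MyTexture", typeof(Texture2D));

        bundle.Unload(false); // false keeps the loaded Texture2D alive, frees the bundle itself
        www.Dispose();
        // ... use tex from here on ...
    }
}
```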
To what degree the video will factor in is something I can’t comment on, but as it streams and needs to buffer, I would take it for granted that it’s definitely not just the backbuffers that are being used there.
Thanks, lots of good points,
I have no model animation; everything moves by computing node positions/rotations.
I did indeed have static batching enabled (I did not enable the dynamic one).
I do not load textures using any of the methods you mentioned, so that should be OK.
I do not use asset bundles.
I do not use decompress on load.
I do have up to two audio streams at a time, since we are cross-fading between the game menu’s music and the in-game one.
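For illustration, a cross-fade along these lines (a sketch, not our actual code; the sources and duration are placeholders) keeps the second compressed stream alive only for the duration of the fade itself:

```csharp
// Sketch: fade one music source out while fading the other in, then Stop() the
// outgoing source so only one compressed stream keeps decoding afterwards.
using System.Collections;
using UnityEngine;

public class MusicCrossFader : MonoBehaviour
{
    public IEnumerator CrossFade(AudioSource from, AudioSource to, float duration)
    {
        to.volume = 0f;
        to.Play();

        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            float k = t / duration;
            from.volume = 1f - k;
            to.volume   = k;
            yield return null;
        }

        from.Stop();   // back to a single compressed stream
        to.volume = 1f;
    }
}
```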
As for the video buffers, looking at PowerVR’s documentation, it is my understanding that 3D data streamed to the SGX is held until render time in its internal memory. How it handles overflows, I don’t know.
Well, that leaves me with two excellent actions to take to lower memory consumption, thanks again!