I did a memory allocation test: allocate a fairly large block of memory (10000000 * sizeof(int), about 40 MB) and free it every frame. The main thread takes what seems to me an unreasonable amount of time to do this (~11 ms).
I checked the profiler and saw that a LargeAllocation function gets called when the allocation is larger than some threshold (I would appreciate it if someone could tell me the precise value).
Is this the correct way to allocate a large block of memory? Any suggestions are greatly appreciated.
using Unity.Collections;
using UnityEngine;

public class AllocateMono : MonoBehaviour
{
    void Update()
    {
        // Allocate ~40 MB every frame and free it immediately.
        NativeArray<int> array = new NativeArray<int>(10000000, Allocator.Temp);
        array.Dispose();
    }
}
Thank you. This is only a test; what I want to say is that in Unity, allocating a large block of memory performs very poorly. In plain C# or in C++, allocating just 40 MB is much faster. Yes, I can allocate it in Start, or less frequently, but there will still be a hitch whenever the allocation happens.
In C# you generally cannot allocate an array without initializing it. As you can see, the part that takes the most time is actually the memset of those 40 MB; the actual allocation only took 0.05 ms, or 50 µs.
A C++ memset is roughly rated at 30 GB/s, which works out to about 1.3 ms for a 40 MB array; in the case of std::fill it would take around 23 ms. So about 11 ms for creating and initializing a native array that can actually be used from the managed side isn't really that slow. Keep in mind that C++ and C# have very little in common: in C# arrays cannot live on the stack the way they can in C++, so most comparisons to C++ make little sense.
If you think you can come up with a better implementation, feel free to write your own native code plugin in C++ and see if you can make it any faster.
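One thing worth checking before going all the way to a native plugin: if you know you will overwrite every element anyway, the NativeArray constructor takes a NativeArrayOptions argument that lets you skip the clear. This is only a minimal sketch of that idea, and the exact savings depend on your Unity / Unity.Collections version, so verify it in the profiler:

using Unity.Collections;
using UnityEngine;

public class AllocateUninitialized : MonoBehaviour
{
    void Update()
    {
        // Request the memory without the memset. The contents are garbage,
        // so only do this when every element is written before it is read.
        NativeArray<int> array = new NativeArray<int>(
            10000000, Allocator.Temp, NativeArrayOptions.UninitializedMemory);
        array.Dispose();
    }
}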
You should generally avoid such large arrays when possible. Allocating large arrays takes longer and can fail more easily, since a contiguous block of memory is required. If you really need arrays this large then, as bobisgod234 said, you should cache them and only create one array that is large enough…
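A rough illustration of that caching idea (the class name and size here are made up for the example): allocate the array once with Allocator.Persistent, reuse it every frame, and dispose it when the component is disabled:

using Unity.Collections;
using UnityEngine;

public class CachedAllocation : MonoBehaviour
{
    NativeArray<int> buffer;

    void OnEnable()
    {
        // Pay the allocation and clear cost once instead of every frame.
        buffer = new NativeArray<int>(10000000, Allocator.Persistent);
    }

    void Update()
    {
        // Work with the cached buffer here instead of allocating a new one.
    }

    void OnDisable()
    {
        if (buffer.IsCreated)
            buffer.Dispose();
    }
}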
I have used C++ plugins for some independent modules when I ran into performance or code-management issues. In this case, allocating the memory on the C++ side and using it from C# is probably not a great fit, and not that easy.
In my SLG game the map has 1200 * 1200 grid cells, which is a very large size. I tried testing the classic A* algorithm for pathfinding (I know classic A* will run into CPU and memory usage issues here; I just wanted to see how bad the performance would be in this scenario). Since many pathfinding jobs (maybe hundreds) can be triggered in the same frame, the classic algorithm allocates at least 1200 * 1200 * 3 * sizeof(int) per job (roughly 17 MB each) just to initialize g/h/previousNode, and then allocates more memory later for the open and closed lists, which is where I hit the large-allocation issue described above. Besides optimizing the algorithm or moving the whole A* into a C++ plugin, I just want to check whether there is a better solution for memory allocation in Unity.
The solution is to pre-allocate your memory on launch. You don't need to literally allocate a new block of RAM for your open/closed lists every time. If you are concerned about possibly running out of RAM during the game, you can pre-allocate a generous amount on launch and only allocate more while the game is running, on the off chance that you somehow need more than you anticipated.
This should solve all your problems without having to hack DLLs or do anything on the native side.
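As a minimal sketch of that pre-allocation for the A* case (the PathfindingScratch class and its fields are invented for this example, not an existing API): size the per-node arrays for the full 1200 * 1200 grid once at launch, keep one instance per concurrent job, and just reset them before each search instead of allocating new ones.

using System;
using Unity.Collections;

// Hypothetical scratch buffers for one pathfinding worker.
// Create one instance per concurrent job at launch and reuse it for every search.
public class PathfindingScratch : IDisposable
{
    public NativeArray<int> g;
    public NativeArray<int> h;
    public NativeArray<int> previousNode;

    public PathfindingScratch(int nodeCount)
    {
        // One large allocation per array, paid once on launch.
        g = new NativeArray<int>(nodeCount, Allocator.Persistent);
        h = new NativeArray<int>(nodeCount, Allocator.Persistent);
        previousNode = new NativeArray<int>(nodeCount, Allocator.Persistent);
    }

    public void Reset()
    {
        // Resetting reuses the already-committed memory, so no large
        // allocation happens during gameplay, only a clear.
        for (int i = 0; i < g.Length; i++)
        {
            g[i] = int.MaxValue;
            h[i] = 0;
            previousNode[i] = -1;
        }
    }

    public void Dispose()
    {
        g.Dispose();
        h.Dispose();
        previousNode.Dispose();
    }
}

With a 1200 * 1200 grid, new PathfindingScratch(1200 * 1200) commits roughly 17 MB up front per worker; call Reset() at the start of each search.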