OK, Mono uses Just-in-Time (JIT) compilation to run on most platforms. But on iOS, Ahead-of-Time (AOT) compilation is used (Apple does not allow dynamic code generation, presumably for security and performance reasons).
So could Unity and Mono not benefit from full AOT compilation with optimisations?
Then we wouldn’t have to go back to C++.
But we would have to have a decent garbage collection system that does not get in the way of our games.
All AOT does is reduce the up-front overhead of JIT… and there's no way it should be full AOT. AOT on iOS is already awful and causes issues.
I’m sure there are. I should have been keeping a running total of all of the workarounds I’ve had to use with JSON .NET to make it work with AOT. Most of them centre on generics and collections… weird behaviour like GetEnumerator calls returning a string instead of an actual enumerator, and generic interfaces blowing up at random. It’s also a performance trade-off: the games would start faster, but you lose the bit of real-time performance that JIT can gain by compiling code dynamically while the game is running.
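For context, the usual shape of those generics workarounds is to reference the concrete generic instantiations somewhere in compiled code so the AOT compiler actually emits native code for them. A minimal sketch of that pattern (the method and type names here are placeholders, not anything from JSON .NET itself):

```csharp
using System;
using System.Collections.Generic;

public static class AotHints
{
    // This method is never called at runtime. It exists only so the AOT
    // compiler sees these concrete generic instantiations and generates
    // native code for them, instead of failing when they're first used
    // via reflection or a generic interface.
    public static void EnsureGenericCodeIsCompiled()
    {
        new List<int>().GetEnumerator();
        new Dictionary<string, int>().GetEnumerator();

        // Touching the generic interface explicitly can also help:
        IEnumerable<int> e = new int[0];
        e.GetEnumerator();

        throw new InvalidOperationException(
            "This method is a compile-time hint only; do not call it.");
    }
}
```

The point is that under full AOT there is no compiler around at runtime to generate a missing `List<T>` instantiation on demand, so you have to spell them out ahead of time.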
JIT can do some optimizations that AOT can’t, so it’s not the case that AOT is automatically superior. Also, it doesn’t really reduce startup time since that was already insignificant (source).
We don’t have to “go back to C++”. Please cite a reference that says we do.
OK, maybe I’m confusing AOT compilation and native code. Aren’t they the same thing?
You do understand that I just want Unity to be as fast as possible, so I can throw lots of game objects around and not have to worry about GC hiccups, draw calls, and all that boring stuff that just gets in the way.
And have you heard about Microsoft’s Project N? It’s natively compiled and built for multi-core systems, and they are using it on their mobile platforms to increase performance (see link below).
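On the GC-hiccup point above: the usual way to dodge those, whatever the compiler does, is to stop allocating in the per-frame path at all, e.g. with an object pool. A minimal sketch (not Unity's own pooling API; `Pool<T>` is a made-up name here):

```csharp
using System.Collections.Generic;

// A minimal object pool: reuse instances instead of allocating new ones
// every frame, so the garbage collector has far less work to do mid-game.
public class Pool<T> where T : new()
{
    private readonly Stack<T> free = new Stack<T>();

    // Hand out a recycled instance if one is available, else allocate.
    public T Get()
    {
        return free.Count > 0 ? free.Pop() : new T();
    }

    // Give an instance back for later reuse instead of letting the GC
    // collect it.
    public void Return(T item)
    {
        free.Push(item);
    }
}
```

Typical usage is to `Get()` a bullet or particle when it spawns and `Return()` it when it despawns; after warm-up, steady-state gameplay then allocates nothing, which is what actually prevents GC spikes.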
So it sounds like AOT can sometimes out-optimise JIT, and in other cases JIT can out-optimise AOT. The only way to know would be to have the option in Unity to build with either JIT or AOT and test the results.
Or be able to build flexibly, so that areas that benefit from JIT are left alone and other areas of code are compiled AOT; then you could have the best of both worlds.
JIT also results in native code. There are many ways to compile things; everything is native code in the end. Being native code doesn’t have much to do with how well it runs…if it’s badly optimized, it won’t run very well.
As long as computers don’t have infinite speed, you will need to pay attention to how to best use the architecture; there’s really no way around that. Are you actually making a game, or just coming up with things to worry about instead of being productive?
The only reason I can see AOT being useful is stripping the source code from the build. Currently, with JIT you pretty much have the full source for any game (and disguising this only helps a little).
I think we would be surprised at how many .NET developers (and everyone else) don’t realize that it IS compiled to native code… after the JIT process. It’s not interpreted like in days of yore.
It may be compounded because scripting does tend to be interpreted, though not always (JavaScript in web browsers these days doesn’t run like it used to, for example). So when people hear about “scripting” in Unity, they kind of assume interpreted code.
I’d have thought that JIT would have some pretty significant advantages over AOT when it comes to cross-platform development, as it can have compile-time optimizations per-platform without having to include a whole bunch of pre-compiled per-platform binaries.
So what we need is an FJIT compiler: on the first play-through it runs a just-in-time compiler for that platform, but it saves the native code for subsequent plays.
Then you can take advantage of platform-specific optimisations, e.g. an Intel or AMD chipset, an Nvidia or AMD GPU, or the specific flavour of ARM processor on Android.
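Worth noting that Mono’s own AOT mode already works roughly like this compile-once-reuse-later idea: you pre-compile an assembly to a native image on the target machine, and later runs pick it up automatically. A sketch of the commands (`Game.exe` is a placeholder name, and the exact image extension varies by OS):

```
# Pre-compile the IL to a native image once, on this machine:
mono --aot Game.exe      # produces Game.exe.so next to the assembly

# Subsequent runs detect and load the cached native code automatically:
mono Game.exe
```

Because the compilation happens on the end user’s machine, it can target that machine’s actual CPU rather than a lowest-common-denominator build.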
But really what we want is natively optimised code for every platform, and I would think that a C++ compiler can seriously outperform a JIT compiler, as it can spend far longer analysing the code. All we need is a C#-to-native compiler and we could have the best of both worlds.
Actually, I wonder if natively compiled code could be analysed for memory allocations and destructions, allowing for a much smoother and faster GC.
Which is exactly what a JIT does*. It takes platform-independent bytecode and compiles it to platform-specific native code.
None of which has anything to do with GPUs.
Also, saving the output for subsequent runs would also nullify some of the other benefits of JIT compilation, as well as adding its own complications (Where does it go? How is it loaded back in? Why do you assume that would necessarily be faster than the already-super-fast JIT process?). For instance, while I don’t know if Mono does this in particular, one of the advantages of JIT compilation is the ability for parts of the code to be re-compiled on the fly to suit not just the processor but also usage patterns.
Seriously, the people and organisations who work on this stuff have put a truckload of time, effort, and research money into it. If there were simple solutions that actually had real-world benefits, I’m 99.999% certain they’d have been applied by now. The systems already in place are pretty darn good.
Also, you’re far better off writing better code than thinking of ways for processors/compilers to get better (which is something that they’re constantly doing anyway).
Mileage of the optimization may vary. But in any case it’s always native.
Actually, that’s just because we have to write shaders and pass data to them at the moment; newer language add-ons and new languages allow for combined CPU/GPU programming.
OK, but then why is C# JIT code still often only about half the speed of equivalent C++ code?