
With Android 5.0, Google has introduced the Android Runtime, or ART. ART "brings improvements in performance, garbage collection, applications debugging and profiling." However, it also replaces Dalvik's Just-in-Time compilation with Ahead-of-Time compilation performed by translating the Dalvik bytecode to a native ELF executable at application install time.

My question is: what were Google's technical reasons for making this choice and why is it more performant? I've long been an advocate of native code over managed code, but even so, I thought that JIT'd code could outperform AOT-compiled code in many or perhaps even most situations due to its ability to re-optimize for actual application behavior at runtime.

Are the advantages of AOT specific to Android's implementation or environment (i.e. mobile devices with ARM CPUs and limited RAM), or was JIT just oversold?

Mitch Lindgren
  • recommended reading: **[Why we're not customer support for \[your favorite company\]](http://meta.stackoverflow.com/a/255746/839601)** – gnat Feb 27 '15 at 07:27
  • In what way is this a "customer service" type question? – Mitch Lindgren Feb 27 '15 at 07:30
  • "why did Google do this" – gnat Feb 27 '15 at 07:31
  • I think he's interested in the technical reasoning behind choosing AOT over JIT, not just "why did Google change this". You should change the wording of your question to reflect that, though. – Ivo Coumans Feb 27 '15 at 07:32
  • @gnat I am asking what their technical reasons for switching to AOT compilation were; furthermore, that is only one part of the question. I can change my phrasing if you would prefer, but I strongly disagree that this is in any way the kind of question that would be more appropriate for customer service. – Mitch Lindgren Feb 27 '15 at 07:33
  • Why point out Google when Microsoft is getting in on the AoT bandwagon with their ['.NET Native'](https://msdn.microsoft.com/en-us/vstudio/dotnetnative.aspx) compile tools? – gbjbaanb Feb 27 '15 at 15:17
  • @gbjbaanb Because I wasn't aware of that. – Mitch Lindgren Feb 27 '15 at 20:09
  • Java 9 also has an AoT compiler: https://www.infoworld.com/article/3192105/java/java-9s-aot-compiler-use-at-your-own-risk.html – phuclv Jan 18 '18 at 17:23

1 Answer


Compilers need two things to generate performant code: information and resources.

JIT compilers have way more information at their disposal than AOT compilers. Static analysis is impossible in the general case (nearly everything interesting you would want to know about a program can be reduced to either the Halting Problem or Rice's Theorem), and hard even in the special case. JIT compilers don't have this problem: they don't have to statically analyze the program, they can observe it dynamically at runtime.
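A tiny Java sketch (the names are invented for illustration) of the kind of fact a JIT can simply observe at runtime, but a static compiler cannot in general prove:

```java
interface Shape { double area(); }

class Circle implements Shape {
    final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Square implements Shape {
    final double s;
    Square(double s) { this.s = s; }
    public double area() { return s * s; }
}

public class Profile {
    // To an AOT compiler, shape.area() is a virtual call whose target
    // depends on which Shape implementations can reach this call site
    // from anywhere in the program -- undecidable in general.
    // A JIT watching the running program can see that, in this run,
    // the call site only ever receives Circle, and optimize for that.
    static double render(Shape shape) {
        return shape.area();
    }

    public static void main(String[] args) {
        double total = 0;
        for (int i = 0; i < 10_000; i++) {
            total += render(new Circle(1.0)); // monomorphic in practice
        }
        System.out.println(total);
    }
}
```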

Plus, a JIT compiler has techniques at its disposal that AOT compilers don't, the most important one being de-optimization. Now, you might think: why is de-optimization important for performance? Well, if you can de-optimize, then you can be over-aggressive in making optimizations that are actually invalid (like inlining a method call that may or may not be polymorphic), and if it turns out that you were wrong, you can then de-optimize back to the un-inlined case (for example).
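A rough, hypothetical sketch of what speculative inlining with a de-optimization fallback amounts to, written as plain Java for illustration. In a real JIT the guard is compiled machine code and "de-optimization" transfers control back to the interpreter or unoptimized code; here it is just an ordinary method call:

```java
interface Figure { double area(); }

final class Disc implements Figure {
    final double r;
    Disc(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

final class Box implements Figure {
    final double s;
    Box(double s) { this.s = s; }
    public double area() { return s * s; }
}

public class SpeculativeInline {
    // The JIT has observed that every Figure reaching this call site
    // so far was a Disc, so it speculates: a cheap type guard, then
    // the inlined body of Disc.area() on the fast path.
    static double area(Figure f) {
        if (f instanceof Disc) {
            Disc d = (Disc) f;
            return Math.PI * d.r * d.r; // inlined fast path
        }
        return deoptimize(f);           // the speculation was wrong
    }

    // Stand-in for real de-optimization: fall back to the generic
    // virtual dispatch the optimized code was trying to avoid.
    static double deoptimize(Figure f) {
        return f.area();
    }

    public static void main(String[] args) {
        System.out.println(area(new Disc(1.0))); // guard holds
        System.out.println(area(new Box(2.0)));  // guard fails, fallback
    }
}
```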

However, there's the problem of resources: an AOT compiler can take as much time as it wants, and use as much memory as it wants. A JIT compiler has to steal its resources away from the very program that the user wants to use right now.

Normally, that is not a problem. Today's machines are so ridiculously overpowered that there are always enough resources at the JIT's disposal. This is especially true because the JIT uses the most resources when a lot of new code is introduced into the system at once, which is usually during program startup or when the program transitions between phases (for example, from parsing the configuration files to setting up the object graph, or from finishing configuration to starting the actual work), at which times the program itself typically does not yet use many resources. The Azul JCA is a good example. It has 864 cores and 768 GiByte of RAM in its biggest configuration (and note that they haven't been sold for quite some time, so that is actually several years old technology). According to Azul's measurements, the JIT uses maybe 50 cores when it is working very hard. That still leaves more than 800 cores for the program, the system, and the GC.

But your typical Android device doesn't have 1000 cores and a TiByte of RAM. And it is extremely interactive and latency-sensitive: when the user starts, say, WhatsApp, he wants to write a message right now. Not in 500 msec, when the JIT has warmed up. NOW.

That's what makes AOT attractive here. Also note that JIT compiling not only steals resources away from the running program, it also drains battery power, and it does so every time the program runs, whereas an AOT compiler only needs to spend that power budget once, when the app is installed.

You could go even more extreme and push the compilation off to the app store or even to the developer, like Apple does, but Apple has the advantage of a much more limited set of possible target platforms to consider, so on-device AOT compilation seems a reasonable trade-off for Android.

Jörg W Mittag
  • +1. There's a middle ground between AOT and JIT: compiling the program so that it gathers and saves runtime data, then periodically recompiling it ahead of time using that data. Of course that has the complication of having all the sources (or bytecode/IR/whatever), libraries and tools needed for the recompile available even after the program's installed, and deciding when's a good time to recompile. For the use case of a mobile device it's still a lot simpler to just compile it once and leave it up to the developer to not do anything grossly inefficient. – Doval Feb 27 '15 at 14:08
  • @Doval: You're right, PGO gives you some of the "adaptive" stuff that JITs can do, but it won't give you the "speculative" stuff, which needs the capability to de-optimize. Note that not all JITs do this. The CLR JIT (and I believe Mono also) only ever compiles code once, before it is run, at load time. (After all, NGen is essentially just running the JIT.) So it doesn't do any adaptive or speculative optimizations either, and yet is still pretty darn good at producing high-performance code. – Jörg W Mittag Feb 27 '15 at 14:19
  • @JörgWMittag IIRC the CLR JIT compiles method-by-method as each is accessed for the first time, and can throw away code too – this is why they created the NGen tool, which does the full compile up front and not only compiles everything but also stores it in sharable pages on disk (the JIT cache is in memory). I think a lot of the high performance of today's code is mostly due to the incredible speed of today's CPUs; JIT was not a good option only 10-15 years ago, when we didn't even have multi-core CPUs. I mean, Java really was slow back then, but you don't hear that now... – gbjbaanb Feb 27 '15 at 15:16
  • @gbjbaanb: Java didn't have good JITs (or even JITs at all) in the beginning. It wasn't until the Java community started to absorb technology from Smalltalk and Lisp that Java became fast. Ironically, modern high-performance JVMs don't actually make much use of the staticness of Java; e.g. HotSpot is just a slightly modified Smalltalk VM (it is derived from the Animorphic Smalltalk VM, which is also the precursor to V8). When the Self VM (which is in turn the precursor to the Animorphic Smalltalk VM) came out, it actually competed with and even beat the available C++ compilers of the time. – Jörg W Mittag Feb 27 '15 at 15:20
  • The Animorphic Smalltalk VM was designed to run fast on a 386 with (I think) 4 MiByte of RAM. What those multicore CPUs allow us to do is essentially ignore the fact that the JIT steals its resources from the program, because there are just *so many* resources available that it doesn't matter. – Jörg W Mittag Feb 27 '15 at 15:22
  • @JörgWMittag Exactly. In the very early days Java was interpreted... but I don't think the JIT compilers for Java have improved so significantly over time, so the speedups have come from ever faster and multi-core CPUs. And we still have benchmarks that say X is better than C; benchmarks... pah! – gbjbaanb Feb 27 '15 at 15:23