1

I'm just not sure why JIT (just-in-time) and AOT (ahead-of-time) compilation are so often presented in opposition to one another.

If we do not care about portability, it seems to me that a program could very well be AOT compiled and then, at runtime, a JIT could be used to re-optimize the hot parts.

What are some known implementations using this scheme? If there are none, why not?

jeremie
  • FWIW I didn't recognize AOT as a term. So for reference JIT = "Just In Time" and AOT = "Ahead Of Time". – Peter M Mar 05 '20 at 16:10
  • Depends on how closely you define what compilation in AOT means. I would argue Java and C# are already doing what you describe: the source is compiled to something close to, but still independent from, machine code by the developer and then JITed where beneficial. It's just that the first compilation only goes 60-70% of the way from human-readable to machine-readable, not 100%. – marstato Mar 05 '20 at 16:35
  • Also note that static, source-level information is much more important for optimization than empirical runtime data. The more of that source-only information you compile away in the AOT step, the more intelligent the JIT needs to be in re-gathering it empirically. There is a balance to be struck and many, many attempts have been made. – marstato Mar 05 '20 at 16:55
  • "Yes / No" questions and especially "Is this possible" questions are generally frowned upon on Stack Exchange. They only allow two possible answers ("Yes" or "No"), and neither of those answers is actually useful. What you should do instead, is simply *assume* that it is possible, start implementing whatever it is that you are implementing, and when you run into a problem, *then* you ask a *focused* question. See https://softwareengineering.meta.stackexchange.com/a/7274/1352 for details. Just as an example: the answer to your question is "Yes, of course it is possible". How does that help? – Jörg W Mittag Mar 05 '20 at 17:38
  • @jeremie Short answer: Yes, it's possible and is used. I'm not sure I'd say they're presented in "contradiction" to one another, but more so presented as "distinct" since, inherently, they describe distinct time periods of when a certain compile process happens. By definition, a compile process can't be both AOT and JIT at the same time, but that doesn't stop them from being done in succession as is the case with Java and C# as mentioned by marstato. – xtratic Mar 05 '20 at 18:40

3 Answers

6

You're talking about profile-guided optimization (PGO). The Scala native AOT compilers I've seen that employ PGO do so in two stages. The first pass instruments the code so that it generates a profile file at runtime. You then run that instrumented version through a test suite that exercises the typical use cases of your product. That profile file is then used as an input to a second build pass, which actually applies the optimizations and generates the native executable.

By pregenerating the runtime profile at build time, you lose the potential benefits of optimizations based on unusual usage patterns of your code, but you also avoid the runtime profiling overhead. I've seen this demoed at conferences and don't remember the exact numbers, but the run time was comparable, and there was an impressive reduction in RAM usage and startup time compared to the HotSpot JVM, which does essentially the same PGO in a JIT manner.
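
To make the two-stage idea concrete, here is a minimal Java sketch of roughly what the instrumented first pass does: counters are bumped on the branches of interest, and the counts are dumped to a profile file when the test run exits so that the second build pass can read them and decide how to optimise each path. This is only an illustration; real PGO toolchains (GraalVM's native-image PGO mode, for instance) insert the instrumentation inside the compiler and record far richer data, and all class, field, and file names here are invented.

```java
// Illustrative sketch only: roughly what an instrumented PGO build records.
// Real AOT toolchains insert this bookkeeping in the compiler, not in source.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.atomic.LongAdder;

final class Profile {
    // One counter per instrumented branch (names are made up for this sketch).
    static final LongAdder FAST_PATH = new LongAdder();
    static final LongAdder SLOW_PATH = new LongAdder();

    static {
        // When the test-suite run exits, dump the counts. The second build
        // pass would read this file and use the ratios to guide optimisation.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try {
                Files.writeString(Path.of("app.profile"),
                        "fastPath=" + FAST_PATH.sum() + "\nslowPath=" + SLOW_PATH.sum() + "\n");
            } catch (IOException ignored) {
                // Profile dumping is best-effort in this sketch.
            }
        }));
    }
}

class Handler {
    int handle(int x) {
        if (x >= 0) {
            Profile.FAST_PATH.increment();   // hot path in typical test runs
            return x * 2;
        }
        Profile.SLOW_PATH.increment();       // rarely taken in practice
        return -x;
    }

    public static void main(String[] args) {
        Handler h = new Handler();
        for (int i = 0; i < 1_000; i++) {
            h.handle(i);                     // stands in for the test suite
        }
    }
}
```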

Karl Bielefeldt
2

You seem to be missing a critical aspect of JIT: platform independence. As soon as you compile down to machine code AOT, your program isn't platform-independent anymore.

That said, the .NET ecosystem has provided this capability for quite some time now. You can read about that in this article.

I'm not sure what you mean by re-optimizing hot spots using JIT. By definition, once you compile AOT, your program is, well, already compiled.

Robert Harvey
  • My understanding is that JIT also takes advantage of some runtime information to better optimize the compiled code. – jeremie Mar 05 '20 at 16:02
  • I would imagine that refers to the startup delay inherent in JIT. That problem doesn't exist for AOT. See https://en.wikipedia.org/wiki/Just-in-time_compilation#Startup_delay_and_optimizations – Robert Harvey Mar 05 '20 at 16:06
  • I was thinking more of things like removing unused code branches, or replacing some variables with constants. Basically some information that the JIT can use but that would not be available at AOT time. – jeremie Mar 05 '20 at 16:16
  • Back in the 90's I worked with a Windows NT system that ran on DEC Alpha processors but ran Intel AOT-compiled applications. There was a technology back then that would run a JIT and optimize the code to native code. I can't remember the exact details or easily google them - so my actual recollections may be a bit hazy. – Peter M Mar 05 '20 at 16:20
  • @PeterM: I also can't remember the (code)name of that, but when Intel absorbed the leftovers of DEC, that technology became the basis of the x86 emulation support for IA-64 Windows and Linux. You might also be interested in HP's Dynamo project. It was a dynamic translator (just another name for JIT compiler) that was supposed to translate HP PA-RISC machine code to IA-64 machine code to allow HP to change the CPU architecture of their mainframes without customers having to re-compile their code. Since IA-64 was just a piece of paper at that point, they did performance testing by writing a … – Jörg W Mittag Mar 07 '20 at 10:47
  • … HP PA-RISC backend for Dynamo. Since this *should* have been essentially a no-op, they thought that in addition to validating the basic functionality, this would also allow them to measure the overhead, i.e. the slowdown caused by having to parse and analyze machine code, make sense of it, then compile it again. Much to their surprise *the overhead was negative*, IOW, the code actually executed *faster* when run under Dynamo than natively. (Obviously, any Smalltalk or Lisp programmer would have said "Told you so", but those C, C++ and assembler guys were caught off guard.) – Jörg W Mittag Mar 07 '20 at 10:52
1

Yes. It's possible, and quite common, for a program to be compiled three times:

  1. On the developer's machine, the code is compiled to intermediate language code. This is a form of the code that's faster to parse than the source code, but is targeted at a generic virtual machine or a generic version of the target architecture. The purpose of the compilation here is mainly portability rather than optimisation, but many simple optimisations can still happen at this step, like dead code elimination, constant folding, method inlining, etc.

  2. When the program is installed or loaded for the first time, the intermediate language code can be compiled to native code for the machine's architecture. This compilation step allows the program to take advantage of knowledge of the particular instruction set it is running on. Rather than compiling for a generic architecture, like "most x86 CPUs", compiling at installation time lets the compiler know it is running on, say, an "AMD Ryzen 3rd Gen Threadripper 3960X", so the program can use the latest CPU instructions of that particular machine without worrying about making the code unsuitable for other x86 machines.

  3. At runtime, a JIT may trace the execution path of the code and find out that the file the user is working on doesn't use feature Foo. It might observe that a hot loop calling method A is always followed by calls to methods C, D, and H, and that a number of if-conditions in those methods have never been observed to be true, because they are only needed for files that use Foo. So it recompiles C, D, and H into a new method C_D_H that removes those if-blocks, and puts a guard in method A to return to the old code if the user suddenly opens another file that does use feature Foo (a rough sketch of this follows right after this list). JIT compilation has the best vantage point to produce the fastest code, but it's also risky: the JIT needs to instrument the code to trace where execution is going, and it has to do this analysis and recompilation in real time, which means it can actually slow down execution compared to just running the unoptimised code.
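
As a rough illustration of step 3, this is what that guarded, specialised code could look like if it were written out as Java source. A real JIT emits this as machine code and deoptimises transparently; the class and method names below are invented for the sketch, and the bodies of "C, D, and H" are reduced to trivial arithmetic.

```java
// Illustrative sketch only: the "guard + specialised method" shape from step 3.
class Document {
    final boolean usesFoo;
    Document(boolean usesFoo) { this.usesFoo = usesFoo; }
}

class Processor {
    // Generic path: the code as originally compiled, full of feature checks.
    int processGeneric(Document doc, int value) {
        int result = value + 1;                  // stands in for method C
        if (doc.usesFoo) {
            result = fooAdjustment(result);      // never taken for non-Foo files
        }
        return result * 2;                       // stands in for methods D and H
    }

    // Specialised version the JIT might produce after observing that usesFoo
    // has never been true: the Foo branches are gone and C, D, H are fused.
    int processSpecialized_C_D_H(int value) {
        return (value + 1) * 2;
    }

    // "Method A": the guard checks the assumption and falls back
    // ("deoptimises") to the generic code if it no longer holds.
    int process(Document doc, int value) {
        if (!doc.usesFoo) {
            return processSpecialized_C_D_H(value);
        }
        return processGeneric(doc, value);
    }

    private int fooAdjustment(int r) { return r + 42; }
}
```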

Platforms that often take advantage of all three compilation steps include JavaScript, WebAssembly, and Android.

Lie Ryan