Can every interpreted language be compiled? And can every compiled language be interpreted?
-
**[Unclear what help you need](http://meta.programmers.stackexchange.com/questions/6559/why-is-research-important).** Please clarify your specific problem or provide additional details to highlight exactly what you need. As it's currently written, it's hard to tell what problem you are trying to solve or what aspect of your approach needs to be corrected or explained. See the [ask] page for help clarifying this question. – gnat Nov 10 '14 at 04:51
-
@gnat OP is trying to know whether interpreted languages can be compiled, and compiled languages can be interpreted. IOW, whether the dichotomy between the two (which is widely taught if not hokey) can be crossed. Very clear, I'm not sure why you think otherwise. – djechlin Nov 10 '14 at 05:14
-
@djechlin this has been asked here many times before, see e.g. [Interpreted vs Compiled: A useful distinction?](http://programmers.stackexchange.com/questions/136993/interpreted-vs-compiled-a-useful-distinction) and _multiple_ questions shown in the linked and related sections there. It is [unclear](http://meta.programmers.stackexchange.com/questions/6483/why-was-my-question-closed-or-down-voted/6489#6489) what OP is missing in the answers already given – gnat Nov 10 '14 at 06:13
-
@gnat Thank you for pointing to another thread. I think my question is more specific. – wannik Nov 10 '14 at 06:43
-
First of all, most "useful and sensible" questions can be rendered nonsense or unprovable by qualifying with the word "every". – rwong Nov 10 '14 at 06:48
-
Assuming that by "compiled language" you refer to "a mode of program execution by compiling the source code into the machine code for use with the target CPU's architecture, and then executed there", there are several sub-categories: AOT (ahead-of-time), JIT (just-in-time), or IL (intermediate language). Statically-compiled binaries may be yet another one, but some may consider it to be same as AOT. – rwong Nov 10 '14 at 06:51
-
Machine-code binaries can be executed in an emulator. The best known example is [QEMU](http://en.wikipedia.org/wiki/QEMU), because it is able to run binaries compiled for one CPU architecture on top of a different CPU architecture - a clear sign that the original machine code is not executed in its original form. Whether total CPU emulation shall count as "interpreted", "binary-translated" or "dynamically executed" is [up to you](http://en.wikipedia.org/wiki/Humpty_Dumpty). – rwong Nov 10 '14 at 06:55
-
@rwong I don't understand your first comment. – wannik Nov 10 '14 at 06:56
-
Some language features such as dynamic code generation (often confused with reflection; no, they're different) cannot be statically compiled. – rwong Nov 10 '14 at 06:57
-
@wannik: See "all swans are white" fallacy, in [Falsifiability](http://en.wikipedia.org/wiki/Falsifiability) – rwong Nov 10 '14 at 06:58
-
@rwong Then the question "can InterpretedLanguageXWhichHasNoCompilerNow be compiled?" is nonsense, given that in the future there may be techniques that make it compilable. – wannik Nov 10 '14 at 07:02
-
A suggested change of wording: are there languages for which there are technical hurdles that make it unacceptably inconvenient to be both compiled and interpreted in different settings? – Cort Ammon Nov 10 '14 at 07:04
-
@gnat that's called a duplicate. vote to close as duplicate. – djechlin Nov 10 '14 at 07:17
-
@CortAmmon rewording like this would make it software recommendation, which is explicitly off-topic per [help/on-topic]. See http://meta.programmers.stackexchange.com/questions/6483/why-was-my-question-closed-or-down-voted/6487#6487 – gnat Nov 10 '14 at 07:19
-
@gnat:That "no recommendations" meta-post seems to be a bit different than what I'm looking at. When you look at what "compiling" and "interpreting" actually mean, they are tremendously blurry with no clear line between them. However, every language makes design choices. Every language makes some of their tradeoffs based on whether they want to be thought of as "compiled" or "interpreted." If the question cannot be answered in terms of inconvenient tradeoffs, then it becomes an opinion question deciding if a language is compiled or interpreted because every language has bits of both. – Cort Ammon Nov 10 '14 at 07:25
4 Answers
Every possible language must support both compilation and interpretation, by definition
The trick is in the underlying meanings of the idea of "compilation" and "interpretation."
Interpretation is "take this program written in language X, and create a process which applies the rules of language X to the program." It's a fancy way of saying "interpretation is running the program."
Compiling is "take this program written in language X, and write a new program in language Y which, if interpreted, would create the same results as interpreting the program in X directly." It is a mapping operation from one language to another. It's a fancy way of saying "take some time to make a program that, at a later time, would do what I wrote, but faster."
Every language can be mapped to another language; if it couldn't be, the language could not run on computers at all, since a computer ultimately executes only its own machine language. Thus every language can technically be compiled. And, since any compiled program can be run in the form "interpret the act of compiling the program, then interpret the result," every program can be interpreted as well.
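To make those two definitions concrete, here is a minimal sketch in Python (using a made-up toy language; every name here is illustrative, not from any real project) of the same program being interpreted directly and being compiled into another language first:

```python
# Toy language: reverse Polish notation arithmetic, e.g. "2 3 + 4 *".
# Interpreting = applying the language's rules to the program right now.
# Compiling = translating the program into another language (here: Python)
# so it can be run later.

def interpret(program):
    """Run the toy program directly by applying its rules."""
    stack = []
    for token in program.split():
        if token in "+-*":
            b, a = stack.pop(), stack.pop()
            stack.append({"+": a + b, "-": a - b, "*": a * b}[token])
        else:
            stack.append(int(token))
    return stack.pop()

def compile_to_python(program):
    """Translate the toy program into a Python expression (language Y)."""
    exprs = []
    for token in program.split():
        if token in "+-*":
            b, a = exprs.pop(), exprs.pop()
            exprs.append(f"({a} {token} {b})")
        else:
            exprs.append(token)
    return exprs.pop()

source = "2 3 + 4 *"                      # means (2 + 3) * 4
print(interpret(source))                  # 20 -- interpreted directly
print(eval(compile_to_python(source)))    # 20 -- compiled, then interpreted
```

Both paths produce the same result; they differ only in *when* the work of understanding the program happens.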
The line is fuzzy even with assembly. One assumes that x86 code is interpreted, because the job of a CPU is to interpret your instructions. In reality, though, x86 would be too slow if interpreted literally: every modern x86 CPU translates the instructions into its own "native" micro-operations on the fly. This means that literally every "interpreted" language gets compiled at some point.
Some languages are hard to compile or interpret
Some languages do not lend themselves well to compiling or interpreting. One expectation of a "good" language is speed; slow languages tend to be less desirable than fast ones. C++ has rules which take advantage of the fact that it is traditionally compiled rather than interpreted. It has rules that are very expensive to apply (such as function-overload resolution) which compilers can bake down into very fast assembly (with simple, fast rules). A naive C++ interpreter would spend a great deal of time repeatedly re-applying those expensive rules.
Likewise, some languages do not lend themselves well to compiling. Python, for instance, relies on duck typing for everything. Its runtime rules are much cheaper to apply than C++'s compile-time rules, but still slower than assembly's. Compiling Python wouldn't buy you much, because you couldn't map Python onto anything much simpler.
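As a small illustration (a sketch of the point, not from the original answer): the same Python expression means different operations depending on the runtime types involved, so there is no single simpler form a compiler could map it to:

```python
# Duck typing: `x + x` cannot be pinned down until the call actually happens,
# so an ahead-of-time compiler has no single piece of fast code to emit.

def double(x):
    return x + x          # int addition? string concat? list concat?

print(double(21))         # 42           -- integer addition
print(double("ab"))       # 'abab'       -- string concatenation
print(double([1, 2]))     # [1, 2, 1, 2] -- list concatenation
```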
... or could you?
If you look at PyPy, Psyco, or IronPython, there is an effort to compile Python. They use a technique known as just-in-time (JIT) compilation to find the bits of Python which do reward compiling. They compile those parts while interpreting the parts which are hard to compile. In many commonly occurring cases, the result of using IronPython to JIT-compile Python code is as fast as or faster than C++ or C#!
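As a hand-wavy sketch of what such JITs look for (an illustration under stated assumptions, not taken from any of those projects): a hot, type-stable loop like the one below is exactly what a tracing JIT such as PyPy's can specialize into plain integer machine code, while rarely-run dynamic code stays interpreted:

```python
# After watching a few iterations, a tracing JIT can observe that `total`
# and `i` are always machine integers here and compile the loop body down
# to simple integer arithmetic, skipping Python's generic object protocol.

def sum_squares(n):
    total = 0
    for i in range(n):
        total += i * i    # hot, type-stable: an ideal JIT target
    return total

print(sum_squares(10_000_000))
```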
This shows just how blurry the line between compiled and interpreted actually is.

-
2"... JIT ... Python code is as fast or faster than C++ or C#" - any links to proof? Thanks. – Den Nov 10 '14 at 09:28
-
I will have to take a look for links for you. The cases are those where memory management is a large portion of the algorithm, or where the JIT can use processor-specific functionality while an ahead-of-time build has to target the lowest common denominator. I intentionally worded that sentence to point out that not every program will run faster, but in a lot of cases it does. – Cort Ammon Nov 10 '14 at 14:50
-
A general case for JIT being faster is algorithms where it is infeasible to optimize ahead of time because too little is known of the run-time program state where it is called, but when you actually call the function, you are now in a good position to optimize. For example, it is hard to make a library which efficiently sorts any C++ container (dynamic libraries make C++ templates difficult). However, it is easy to write a python program which does so, and let the JIT compiler write a custom optimized version for every container type. – Cort Ammon Nov 10 '14 at 14:57
-
If I may widen your question to all JIT'd languages, there is another really interesting example. Apple uses LLVM (somewhere between compiling and JIT) in their openGL stack. Usually one needs `if` statements to select between hardware features (if they are available) and software implementations (if no hardware is available). In some cases, that is actually expensive. However, LLVM is allowed to look at the user's ACTUAL hardware, and optimize out every one of those `if` statements. The result is as though Apple made a custom openGL driver JUST for their computer. – Cort Ammon Nov 10 '14 at 15:03
-
Thanks for detailed response. C# is also normally JIT-ed and it is also statically strongly typed, so it should always be faster than Python (unless the compiler is implemented poorly). – Den Nov 10 '14 at 16:58
-
@Den: you are right that C# can always be as-fast-or-faster than Python (And, thanks to IronPython, there is data to show it is actually faster). However, the rule of thumb may change over time. While C# *can* always be as-fast-or-faster than Python, in practice each language opens itself up for some optimizations, and makes other optimizations tricky to see and exploit. This already happened to assembly in the last few decades: you CAN write the fastest code in assembly, but in practice, C/C++ performs better in most cases because it is easier to write an optimizing compiler for C/C++. – Cort Ammon Nov 10 '14 at 17:17
-
@Cort Ammon: Cython (and I think Nuitka) compile Python to C. I've used Cython and achieved over 10x speed increase compared to PyPy and other JIT implementations. It does depend on exactly what your code is doing. – Jan 17 '16 at 15:36
-
@gecko I thought Cython was the nickname of the reference implementation of Python (a retroactive name). Did someone actually decide it'd be fun to name their compiler that as well? 10x seems quite impressive. What kind of code are you running through it? For the kind of code I've been writing, I've found Python to be about 10x slower than C, and PyPy runs faster than the reference Python, which means that compiler you mention may actually be compiling Python code to something faster than hand-written C! – Cort Ammon Jan 17 '16 at 16:21
-
@Cort Ammon: CPython and Cython are different projects and one has an extra P in its name. The first is the reference interpreter and the second is a Python-to-C transpiler. The code was graph traversal (CPU work, no IO) for a type of traversal not supported by scipy. – Jan 17 '16 at 20:58
Yes. As a trivial example, we can make every compiled language interpreted by having an interpreter execute the assembly (or whatever the compiled result is). Dually, we can bundle the whole interpreter plus the program into one executable, giving a "compiled" result.
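A minimal sketch of the second half of that trick, using only Python's standard library (real packagers such as PyInstaller go further and embed the interpreter itself; this just bundles the program into one self-running file):

```python
# zipapp packs a Python program into a single runnable archive -- a
# "compiled" artifact in the trivial sense this answer describes.

import pathlib
import zipapp

pathlib.Path("app").mkdir(exist_ok=True)
pathlib.Path("app/__main__.py").write_text('print("hello from a bundled program")\n')

# Supplying an interpreter line makes the target executable on POSIX,
# so `./app.pyz` now runs like any other program.
zipapp.create_archive("app", target="app.pyz", interpreter="/usr/bin/env python3")
```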
You didn't say "compiled well" :)

It depends on your definitions of interpreted and compiled.
For example, I recall that Perl has certain constructs that can only be interpreted at runtime, which makes the language impossible to fully compile ahead of time.
On the other hand, interpretation usually implies that you don't get to parse all the source code before the program starts. I can't come up with an example off the top of my head, but I wouldn't be surprised if this means some languages have to be compiled.
Finally, it's unclear how features such as source-code generation (e.g. Lisp metaprogramming, C macros, Eclipse EMF) and bytecode manipulation (e.g. AspectJ) should fit into this picture.
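A Python illustration of runtime code generation (a sketch, not tied to any of the tools named above): code constructed at runtime cannot appear in any ahead-of-time compiled artifact, because the compiler never sees it:

```python
# The function below does not exist until the program is already running,
# so no ahead-of-time compiler could have emitted machine code for it.

op = input("operator (+ or *): ")          # known only at run time
source = f"def combine(a, b):\n    return a {op} b\n"

namespace = {}
exec(compile(source, "<generated>", "exec"), namespace)
print(namespace["combine"](6, 7))          # 13 for '+', 42 for '*'
```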

-
@wannik it depends on what you mean by 'compiled' and 'interpreted'. Here is a list of [perl 6 compilers](http://perl6.org/compilers/) and they are compilers. But deferring some decisions to runtime doesn't change that fact. Java is considered to be a compiled language, but reflection is something that isn't done until runtime. [You can also create Java classes at runtime](http://robsjava.blogspot.com/2013/03/create-java-classes-at-runtime-from.html). Note that languages are not interpreted or compiled - implementations are. – Nov 10 '14 at 06:52
-
@mkalkov that says it's not possible to do static analysis of Perl. One similarly cannot do static analysis of Java code that makes use of reflection. That does *not* mean that it is not possible to compile (as it is possible to compile Java). Note also that Perl 6 has similar undecidable problems and yet it can be compiled to other targets such as the [CLR (.net / mono)](https://github.com/sorear/niecza). You may also wish to look at [B::C](http://www.perl-compiler.org), which compiles to C. – Nov 10 '14 at 06:55
-
@MichaelT I think the words 'static analysis' should be highlighted here. Do all Perl compilers need a runtime? – wannik Nov 10 '14 at 07:10
-
@MichaelT, indeed, which just shows how languages with similar features can be labelled differently. – mkalkov Nov 10 '14 at 07:11
-
@wannik all languages have a runtime. There's the JRE, there's mono, there's the runtime for Objective C. The lack of the ability to decide how some code will behave before it is run does not mean that the code cannot be compiled. Consider [NSSelectorFromString](https://developer.apple.com/library/mac/documentation/Cocoa/Reference/Foundation/Miscellaneous/Foundation_Functions/index.html#//apple_ref/c/func/NSSelectorFromString) in Objective C - a compiled language with dynamic invocation that is not decidable at compile time (also accessible with the objc_msgSend function call). – Nov 10 '14 at 07:16
-
For a C program, there's no reason the compiler can't be wrapped in roughly the following two lines:

```sh
cc -g source.c
./a.out
```

which makes C pretty much interpreted. I added `-g` to emphasize that the compiled program can stay close to the source (the debug information maps machine code back to source lines).
And any interpreted language can be compiled for efficiency. JavaScript can be uglified (minified ahead of time); Python drops those lovely `.pyc` bytecode files.
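As a concrete footnote to the `.pyc` remark (a sketch using only the standard library): CPython's compiler can be invoked explicitly, and the bytecode it emits is as much a "compiled result" as anything:

```python
# CPython compiles source to bytecode for its virtual machine; py_compile
# exposes that step directly, and dis shows the resulting "machine code".

import dis
import pathlib
import py_compile

pathlib.Path("script.py").write_text("print('hello')\n")
py_compile.compile("script.py", cfile="script.pyc")   # explicit source -> bytecode

dis.dis(compile("x = 1 + 2", "<demo>", "exec"))       # inspect the bytecode
```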
In short: the dichotomy you have been taught is nonsense. Programming languages exist on the entire spectrum from native code to virtual machine code to DSLs that you can write in Java or Scala to interpreted chains of compiled code feeding into each other, which is precisely the Unix philosophy (and any Bash script).
Languages described as interpreted tend not to have "compile-time" errors. But then there are linters, which may as well be built into the compiler and halt when an error is found, much in the style of my toy wrapper above.
Perhaps "interpreted" really means more "dynamic," which just means fewer things are rejected up front as compile errors. But even a language like Java, which people call stricter, can load malformed classes and throw errors at runtime.
And a "compiled" language like C compiles into assembly/machine code. And what is machine code if not code that is interpreted by the CPU? Compiled code has to land somewhere.
The Java compiler goes through just the same process from Java source to Java bytecode, which is interpreted by the JVM in much the same way Python code is interpreted by the Python runtime. So we have a compiled language compiling into an interpreted one, which runs on a virtual machine that is itself programmed in a native language (C++).
And of course you can implement toy languages like Brainfuck in Java or C or Ruby; they're simple enough that you could write an interpreter in assembly just as an exercise in learning assembly. But then again, Ruby can run on the JVM via JRuby, which makes Ruby a lot more like a compiled language.
Get it yet?
