26

We're all too familiar with waiting for compilation, especially on large projects.

Why isn't it a thing to interpret a codebase for quick iterative development, instead of generating code for a binary each time?

Is that because, when compiling at -O0, most of the compilation time comes from parsing, so interpreting wouldn't save much? Or is it because the effort to develop an interpreter is too high (e.g. for languages with a lot of features, like C++)? For a language with a small standard like C, this seems like a reasonable approach instead of waiting for compilation each time you make a change.

Laiv
  • 14,283
  • 1
  • 31
  • 69
gust
  • 377
  • 3
  • 5
  • 10
    Multi-tiered compilation strategies definitely exist: pretty much every JS JIT (Safari's JavaScriptCore, Chrome's V8, Firefox's SpiderMonkey) has both an interpreter (for low latency, low throughput execution of code that isn't JITed yet or isn't called often enough to be worth JITing) and a JIT that compiles hot paths (higher latency, higher throughput). JavaScriptCore even has 3 layers IIRC (2 different JITs). If I had to guess why it's not more common, keeping up two implementations of the same semantics sounds like it'd be a) extra effort, b) hard to keep in sync. – Alexander Jan 19 '23 at 01:37
  • 4
    You might be interested in [Cling](https://root.cern/cling/) ([manual](https://root.cern/manual/cling/)), part of an analysis framework used a lot in particle physics. It's a C++ interpreter, which can run macros from a file and function as an interactive command line interpreter. I left physics just as this was coming out, but I had a lot of experience with its predecessor CINT. I do not remember it fondly. – Chris H Jan 19 '23 at 13:07
  • 4
    *We're all too familiar with waiting for compilation, especially on large projects*. It's 2023, not 1983. `make`, and software like it, has been pervasive across *all* big projects on **all platforms** (Windows, Unix/Linux, VMS, MVS, etc) for 30 years. And of course it's been on Unix for 45 years. Thus, if you are in fact waiting for compilation on large projects, then you are manifestly Doing Something Wrong. – RonJohn Jan 19 '23 at 14:58
  • 7
    Compiled languages are generally certainly *capable* of (re)compiling large projects quickly after changes (because you should only be compiling small bits of it at a time, and one change shouldn't mean recompiling everything). If compilation frequently takes long, it's likely due to insufficient modularity (on one or more of many levels). I'd suggest taking some of that time that you're waiting for compilation and spending that trying to figure out how to make compilation for your project take less time (which is not to say there would be developer resources available to work on that). – NotThatGuy Jan 19 '23 at 16:51
  • 11
    @RonJohn unless your compiler is infinitely fast, you're going to be waiting for code to compile (and link) after modifying anything... the only question is how long you'll have to wait. If you're Doing Everything Right, hopefully you're only waiting a few seconds... but even then you occasionally have to do something (like modify a widely-included header file, or check out a different branch and do a clean/full compile from that) that can cause a longer wait. – Jeremy Friesner Jan 19 '23 at 21:02
  • @JeremyFriesner hopefully you're not doing that very often. But when you do have to... take a walk (it's good for you) and get a cup of coffee. – RonJohn Jan 19 '23 at 21:20
  • Projects that you use C++ to build are _large_, for very _large_ values of _large_. When you're doing your work (properly) you're in some small module and you build and run unit tests and that goes very fast. When you need to use the _system_ you're building then things slow down. Many times/mostly because the system itself is so large. (Not that you can't build exceptionally large systems in _other_ languages: LISP comes to mind immediately. But not JS, Perl, or even Python (IMO).) – davidbak Jan 19 '23 at 21:46
  • I know my experience is dated, yet 40-odd years ago 'everyone knew' that compilers were overall faster and more efficient than interpreters. Is the suggestion here that that's changed? – Robbie Goodwin Jan 19 '23 at 22:41
  • It basically is a thing - that's what IDEs do. – OrangeDog Jan 19 '23 at 23:17
  • 1
    @RobbieGoodwin no. But generally it's quicker to run something in an interpreter once than it is to compile and run it natively once. – OrangeDog Jan 19 '23 at 23:17
  • @OrangeDog Thanks and how does that fit either the Question or its exposition? Apart from anything else, why would Mr Average Developer want to run anything once? – Robbie Goodwin Jan 19 '23 at 23:27
  • 2
    @RobbieGoodwin I usually run a build once, see that it's still not behaving quite the way I want, make a change to the code that I hope will improve its behavior, build it again, run it again, and repeat until it is working the way I want it to. That's a lot of running-it-once. – Jeremy Friesner Jan 19 '23 at 23:39
  • @Jeremy Yes, that's a lot of running it once and yet, that reads like exclusively the development stage. For you, what happens after you've got it right? Does the finished version sit solely on your own kit, or are you putting it out to multiple users, who might be paying for it? – Robbie Goodwin Jan 19 '23 at 23:47
  • 2
    @RobbieGoodwin yes, users eventually get a build from the daily build machine... but as a developer, the development stage is where I spend my time :) – Jeremy Friesner Jan 19 '23 at 23:51
  • @JeremyFriesner Doesn't that take us right back to the OQ, 'Why are… compiled languages not interpreted for faster iteration?' Isn't the Answer still that in the short term - as in solo runs - interpreters might get there quicker but in the long run - as in completed projects - compilers win hands down? To me, it's not at all clear why the OQ Asks about 'commonly compiled languages' rather than the very different 'compiled languages, commonly…' Either way, how is comparing compiled and interpreted languages not like comparing apples and pears, if not chalk and cheese? – Robbie Goodwin Jan 19 '23 at 23:59
  • 1
    @RobbieGoodwin the question is about the development stage – OrangeDog Jan 20 '23 at 09:48
  • 1
    Interpreting the code would just shift a lot of the time spent compiling the code to time spent running the code. What you probably want to do is write more *test harnesses* that can exercise smaller chunks of code for testing, rather than running your entire large project in its final, monolithic state. – chepner Jan 20 '23 at 13:51
  • 1
    Unless you 'interpret' by compiling and running the compiled result in a VM, it's difficult to verify that the interpreter actually behaves exactly the same as the compiler. If it doesn't, you may end up with some really confusing bugs. – Silver Jan 20 '23 at 14:21
  • It's not really much of an issue anymore and it's not presented itself as a problem, here, in quite a while. It's analogous to the situation with colds and flus. They've been so infrequent since the 1900's (at least for me) that I almost forgot what they're like. For your set of use-cases, the key phrase is "each time you make a change", suggesting that you're running, making small, frequent, modifications, re-running, and so on. Aside from the issue of good management (e.g. using "make" to minimize re-compilation), many compilers now do incremental compilation with caches to minimize re-do's. – NinjaDarth Jan 21 '23 at 06:11
  • This is typically the task of a modern IDE. – Thorbjørn Ravn Andersen Jan 21 '23 at 06:41

10 Answers

35

I refute the premise. There are interpreters / REPLs for compiled, static languages; they're just not as much a part of the common workflow as with dynamic languages. Though that also depends on the application. For example, scientists at CERN work a lot in C++ in the Root framework, and they also use the Cling interpreter a lot, an approach which combines many of the advantages of a fast compiled language with those of a slow but interactive interpreted one like Python, especially for scientific purposes.
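
As a rough sketch of what that workflow looks like (a hypothetical session; the exact echo format varies by Cling version), C++ declarations and expressions are evaluated one line at a time, with no separate build step:

```cpp
// Typed interactively at the Cling prompt, one line at a time.
#include <vector>
#include <numeric>
std::vector<int> xs{1, 2, 3, 4};
xs.push_back(5);                           // executed immediately
std::accumulate(xs.begin(), xs.end(), 0)   // no ';' -- Cling echoes the value: 15
```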

With some other languages it's even more drastic. Haskell is a static, compiled language (in some ways even more static than OO languages), but it is very common to develop Haskell interactively using GHCi, either as a REPL (see the online version) or just as a quick typechecking pass to highlight what needs to be worked on. Once something is fully implemented, it becomes part of a library that is always compiled, resulting in fast code, which can then be called either from a fully-compiled program or from another interactive session.

Of course it can also go the other way around: typically interpreted languages like Python, JavaScript and Common Lisp can all be compiled, at least in some sense of the word (either JIT-compiled, or a subset of the language statically compiled). Though in my opinion this approach is far more limited than starting with a strongly statically typed language and then using it more interactively, it can still be a good option for optimising the bottleneck parts of an interpreted program, and is indeed commonly done.

leftaroundabout
  • 1,557
  • 11
  • 12
  • Working with ROOT is sort of comparable to interpreted Python, but I think you'd have to be a bit eccentric to try to run anything with a complicated dependency structure that way. For example, how would you run the unit test [here](https://gitlab.cern.ch/atlas/athena/-/blob/master/Simulation/ISF/ISF_FastCaloSim/ISF_FastCaloSimEvent/src/TFCSPredictExtrapWeights.cxx#L323) using ROOT's interactive interface? I'd love to know, but I think it would take me all day to work out the dependencies. Easier just to let CMake do its thing. – Clumsy cat Jan 20 '23 at 08:51
  • 1
    @Clumsycat not sure how you'd do this in Root / Cling, but in Haskell the way it works is you point `cabal repl` (or `stack ghci`) at the module you're currently working on, it automatically resolves all the dependencies, if necessary downloads packages and compiles dependency modules similar to how `make` would do it, and then loads GHCi with those dependencies plus all the definitions in the module. You can then run any interactive experiments in the repl, including both already defined tests and new ideas. – leftaroundabout Jan 20 '23 at 09:06
  • 1
    Common Lisp was from day zero defined as a compiled language. The first language definition book has extensive coverage of compilation. The current standard has extensive coverage of compilation semantics: http://www.lispworks.com/documentation/HyperSpec/Body/03_b.htm – Rainer Joswig Jan 20 '23 at 20:11
18

Why isn't it a thing to interpret a codebase for quick iterative development, instead of generating code for a binary each time?

Many languages, including C and C++, don't lend themselves to REPL-style interpreters. Making a one-line or even one-character change can have widespread impact on the behavior of a program (consider changes to a `#define`, for example). Somewhat ironically, this same avalanche effect of small code changes leading to large program changes also makes incremental compilation very difficult. So languages that take a very long time to compile will tend to be ones that are also troublesome to interpret.
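
A minimal sketch of that avalanche effect (hypothetical file names): one character changed in a widely included header changes the code generated for every translation unit that includes it, so neither an incremental compiler nor an interpreter can avoid reprocessing all of them.

```cpp
// config.h -- included by many .cpp files across the project
#define MAX_CONNECTIONS 100        // flipping 100 to 1000 changes the machine
                                   // code generated for every includer below

// pool.cpp
struct Connection { int fd; };
static Connection pool[MAX_CONNECTIONS];   // array size baked in at compile time

// stats.cpp
double load_factor(int active) {
    return active / static_cast<double>(MAX_CONNECTIONS);   // constant folded in
}
```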

Telastyn
  • 108,850
  • 29
  • 239
  • 365
  • 2
    This kind of macro-avalanche programming style is largely eschewed in C++, and for lots of languages with long compilation time the argument doesn't hold at all. Sure it is still the case that some changes you could make affect huge parts of the code base, but that's also true for e.g. Python. – leftaroundabout Jan 19 '23 at 13:11
  • 9
    @leftaroundabout it's not just macros. There's a style of programming that eschews runtime polymorphism for compile-time polymorphism, and that can easily lead to the avalanche effect described here. – Caleth Jan 19 '23 at 16:27
  • 4
    @Caleth true, but that is again quite specific to C++ and in particular to its duck-typed template mechanism. It can be largely avoided with good use of [concepts](https://en.wikipedia.org/wiki/Concepts_(C%2B%2B)) or analogous features in other languages. – leftaroundabout Jan 19 '23 at 16:35
  • 8
    @leftaroundabout - it doesn't matter how eschewed a feature is. As long as it's in the spec, language implementers need to respect it. Yes, other languages made design decisions that make them easier to interpret. That's the entire point of the question. – Telastyn Jan 19 '23 at 19:55
  • 4
    _"Somewhat ironically ... small code changes leading to large program changes..."_ - that is not an example of irony, that's just an example of a counterintuitive phenonemon. – Dai Jan 19 '23 at 20:17
  • 11
    @Dai It seems you quoted the wrong part of that sentence. It doesn't claim small code changes leading to large program changes is irony. The irony is that languages that take long to compile are also troublesome to interpret, so the idea "let's interpret to avoid long compilation time" doesn't work easily. – JiK Jan 20 '23 at 08:26
  • 1
    @JiK _"The irony is that languages that take long to compile are also troublesome to interpret, so the idea "let's interpret to avoid long compilation time" doesn't work easily."_ - sorry, but that's still not an example of irony. – Dai Jan 20 '23 at 10:20
9

Comparing, say, Python and Swift: in Python, even simple checks are not made until runtime. I don't actually know whether my program runs until every code path has been executed.

On the other hand, Swift, which started out with notoriously bad compile times, will nowadays recompile only changed files, and will recompile individual methods in other files if they are affected by changes, rather than whole source files. The recompilation is so fast that it happens while you type your program.

Now Bjarne Stroustrup (I think) has said that modern C++ would have been impossible in 1985, when C++ was invented, because no machine from 1985 would have been able to compile it in any reasonable time. We do have powerful computers, and they are used.

But the real answer to your question is: languages like C++ are not interpreted because nobody is willing to invest the time and money to build a fast interpreter, and nobody is willing to pay enough for the ability to use one. And there is the question of how fast an interpreter would run on a project with a million lines of code.

gnasher729
  • 42,090
  • 4
  • 59
  • 119
6

We're all too familiar with waiting for compilation, especially on large projects.

It sounds like the mix of languages you've been using didn't include Go.

Why isn't it a thing to interpret Go? Because the compilation is already fast enough for human edit->debug loops. On purpose.

Some languages sensibly focus on "time to execute this code in production", while Go included "time the developer waits for compilation" as an explicit design goal from the very beginning. And it shows. Just pull out your stopwatch to verify.


Suppose you changed a single character of source code and clicked "Save".

A well crafted Makefile should be able to recompile a single source file, then invoke the link editor and start running your tests. If your environment has a rather creaky setup that does a lot more work than that, it's time to carefully examine those make deps.


Consider using the numba `@jit` decorator on your interpreted Python functions. Comment it out if you don't care to wait on the compiler overhead.


Implementing both an interpreter and a compiler for the same language is a whole can of worms. The principal danger is that the semantics will differ, that is, the same source produces different results in different execution environments. In fairness, this is also a concern for -O0, -O1, -O2, -O3 settings, or the more specific switches, but there are usually some hand-wavy excuses that can be invoked when it turns out that one version behaves differently from another. Quite often the ANSI C notion of "undefined behavior" will rear its head.
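
One classic instance of that divergence: signed overflow is undefined behavior in C and C++, so an optimizing compiler and a naive interpreter can disagree while both remain conforming (a minimal sketch, not tied to any particular implementation):

```cpp
// Signed-overflow UB: two conforming implementations may behave differently.
#include <iostream>

int main() {
    int i = 1;
    // A naive interpreter (or a -O0 build) typically wraps around, so the
    // loop exits once i overflows to a negative value. An optimizer may
    // instead assume signed overflow never happens, conclude that i > 0 is
    // always true, and emit an infinite loop.
    while (i > 0) i *= 2;
    std::cout << i << '\n';
}
```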

If you're authoring "portable" Scheme or Common Lisp, there are quite a few interpreters and compilers to choose from, with diverse compiler options to set. In my experience, time to compile a function I just edited has never posed interesting delays. There is good support for incremental development. OTOH, doing an ASDF compile of a large package might be a bit time consuming when there's lots of source text to analyze.

J_H
  • 2,739
  • 11
  • 19
  • 3
    [Interpreted Go is a thing](https://github.com/traefik/yaegi). There is also [this one](https://github.com/cosmos72/gomacro) and [this one](https://github.com/open2b/scriggo). – 9072997 Jan 19 '23 at 16:12
  • And it is not so fast. Try to compile Kubernetes or something big. Fast is an illusion. – akostadinov Jan 19 '23 at 19:28
  • 4
    @akostadinov, I interpreted OP's "waiting for compilation" complaint as "I just spent ten seconds making a one-line source code change, and now I'm waiting ten minutes on a build", which derails one's train of thought in the middle of an edit-debug cycle. Clearly a million new source lines will take a while for any technology to process, and two million is likely to take twice as long. I didn't see OP's question as being about lazy interpretation, where we win by deliberately ignoring a large number of source lines which some unit test never exercised. – J_H Jan 19 '23 at 20:05
  • Semantic differences between interpreted and compiled execution are a sign of a language deficit (it's too complicated to implement reliably). Considering undefined behaviour has its place when talking about a language that has it, but it's such a weird and dysfunctional misfeature that one shouldn't consider it when talking about in-principle questions like interpreted vs. compiled - it's a truly special case. – toolforger Jan 21 '23 at 09:06
  • Semantic differences are either illegal, in which case either the compiler or the interpreter is wrong, or they are covered by undefined / unspecified / implementation-defined behaviour in C or C++, and then it's the programmer's fault. – gnasher729 Jan 21 '23 at 13:47
  • The "unspecified" aspect comes up quite a lot, both for languages and for popular libraries. Think about some poor app developer trying to make `volatile` and threads work properly in java1.4. It wasn't exactly write once, run anywhere. It took a long time to hammer out a memory model with happens-before semantics, in deployed JVM implementations. And then there's "we specified it but you didn't test it." Think about porting a cPython app to Jython, where instead of immediate refcount-went-to-zero behavior the spec says "the lazy GC will finalize your dead object later. Whenever. Maybe." – J_H Jan 21 '23 at 15:14
5

There are REPLs and interpreters for C++ and most compiled languages

https://replit.com/languages/cpp

http://www.hanno.jp/gotom/Cint.html

But they aren't used often

Why? Because compiling adds checks for things and enforces extra rules like type safety, and those languages were built around enforcing those extra checks. Compiling is a feature!

For example.

Many interpreted languages use "duck typing": the interpreter checks whether it looks like a duck and quacks like a duck, and if so, it's probably a duck. I know plenty of Ruby devs who swear by it, and it's a great thing until you use an object that doesn't have the right functions and the program blows up in production!

This would never happen in a compiled language, because we took minutes (and occasionally hours) to meticulously check for it beforehand.
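
A minimal sketch of the difference (hypothetical types): the compiler proves that every call site quacks before the program ever runs, while a duck-typed language finds out only when the offending line executes.

```cpp
// Static checking catches the bad call at build time.
struct Duck  { void quack() {} };
struct Robot {};                 // no quack()

template <typename T>
void make_it_quack(T& t) { t.quack(); }

int main() {
    Duck d;
    make_it_quack(d);            // fine
    // Robot r;
    // make_it_quack(r);         // uncommented, this is a compile-time error
                                 // rather than a surprise in production
}
```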

Projects that use compiled languages have decided it's worth more time up-front to ensure rules such as type safety are enforced.

Projects that use interpreted languages have decided it's worth more testing and QA after-the-fact to get to play a bit fast-and-loose with the rules.

EDIT

Why don't developers use interpreters and then compile once every so often? Best of both worlds!

Because you can quickly get into a nightmare situation with multiple compile issues: an interpreter won't be able to check everything in anywhere near real time, so it will have to play a little fast-and-loose with the rules. You'll probably end up spending any time savings fixing compile issues only found doing it "the long way."

In practice, stuff like incremental compiles and breaking things into libraries lessens the pain of compiling. Some C/C++ code uses `void*` pointers, which negate type safety. On the flip side, some interpreted languages like TypeScript allow and encourage type safety.
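
To make the `void*` point concrete, a minimal sketch:

```cpp
// void* erases the static type: the compiler can no longer check
// what comes back out.
int main() {
    double d = 3.14;
    void* p = &d;                        // any object pointer converts to void*
    int* wrong = static_cast<int*>(p);   // also compiles: the type check is gone
    (void)wrong;                         // dereferencing *wrong would be UB
    return 0;
}
```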

If you want a real-world example, look at some of the "hot-deploy" development environments in Java, which tried to do what you're talking about. I almost always turned them off because they were hit-and-miss.

sevensevens
  • 368
  • 1
  • 5
  • 4
    https://replit.com/languages/cpp is neither a REPL nor an interpreter, unless you are using a *very* loose definition of REPL. Also, type safety is possible in an interpreted language, just like type un-safety is possible in compiled C. – 9072997 Jan 19 '23 at 16:31
  • 2
    @9072997 well said! Cint / cling _are_ proper interpreters though. And while I fully agree that type safety and compilation are in principle orthogonal, in practice strong static types do go more naturally with compiled languages, and are _way_ more commonly used in those languages than in interpreted ones. – leftaroundabout Jan 19 '23 at 16:49
  • 5
    Most of this doesn't make sense. There is no such thing as a "compiled" or "interpreted" language. Every language can be implemented by a compiler and every language can be implemented by an interpreter. Interpretation and compilation are traits of the interpreter and compiler (duh!), not the language. If English were a typed language, the term "compiled language" would be a type error. Case in point: All current Ruby implementations are compiled, whereas interpreters for C++ exist. So, by this answer's logic, Ruby is more type-safe than C++, since there are no Ruby interpreters. – Jörg W Mittag Jan 19 '23 at 19:33
  • This answer does have some real problems. Compiling C++ is more than type checks these days. Technically, even C++98 was Turing-complete to compile, but modern C++ with `if constexpr` makes this common. – MSalters Jan 20 '23 at 12:22
  • Somebody needs to do the work to make this happen. It is a more complex scenario so it is usually only done if there is nothing more useful the developers can create for the language. – Thorbjørn Ravn Andersen Jan 21 '23 at 04:00
2

Sunk cost.

For languages which are commonly cross-compiled, the debug tooling for compiled binaries will receive a lot of time and attention by necessity. Debugging a locally executed, simulated version of a cross-compiled project is only useful for a subset of the problems you'll need to debug, and interpreters can only produce those simulated versions. Compilation can produce both simulated versions (and often does, for unit tests) and the real version for on-chip debugging.

For example, C and C++ are the go-to languages for bare-metal microprocessor programming, so there is an entire industry pumping resources into making the debug tooling better. With tools like GDB (and the family of tools built to support it, like OpenOCD) being extremely mature, the prospect of writing a C/C++ interpreter for iterative development is less attractive - you'd have a lot of ground to cover just to reach debugging feature parity with GDB.

Add to this that most difficult programming problems involve quite a bit of thinking. Faster iteration stops being useful once the compile time drops below the time the programmer spends thinking about the problem between builds. Personally, I find that the build time of a large embedded C++ project I work on (~30s for cross-compile, ~60s for locally executed unit tests) is more than fast enough. I rarely find myself staring at the screen waiting for it to complete; I return from thought to see that it's done.

Willa
  • 235
  • 1
  • 3
1

Frame challenge: on a modern machine, interpreting the language is not needed; compilation is very fast.

We're all too familiar with waiting for compilation, especially on large projects.

This is often because the developer runs the steps clean-then-compile without thinking, or because they are scripted in without a lot of consideration. When you realise that you used a wrong variable and want to change a single name in the code, does it make sense to rebuild the entire project? Letting the build system see what changed and rebuild only those files can save a lot of time. Reconsider when it is actually appropriate to do a clean before you build.

FluidCode
  • 709
  • 3
  • 10
  • Of course, sometimes doing a not-strictly-necessary "make clean" is a feature... :) https://xkcd.com/303/ – Jeremy Friesner Jan 19 '23 at 21:10
  • Rust is a great example for this, a clean build can take well in excess of a minute. Recompiling when you changed only a few files in your project, in a debug build? Single digit seconds. – jaskij Jan 20 '23 at 20:53
0

If a source code construct will have the same meaning every time it is executed, converting it into non-optimized machine code would generally not be much more expensive than interpreting it. The primary advantages interpreters have over compilers are:

  1. It's easier to "sandbox" an interpreter than a compiler. If an interpreter includes no forms of I/O except a keyboard and display, and no means of accessing memory without bounds checks, even a maliciously-designed program would be unable to do anything beyond read keystrokes that are made available to it and render graphics in response.

  2. They can easily support dynamic languages where the meaning of a piece of code may vary based upon outside factors. For example, in JavaScript, `function foo(x,y) { return x+y; }` may perform arithmetic addition or string concatenation based upon the types of the operands, which a compiler would have no way of knowing at the time it's processing that function (see the sketch after this list).

  3. In environments that would require keeping the source code in memory, an interpreter may be more space-efficient than a compiler that would require keeping both the source code and machine-code equivalent in memory simultaneously.

  4. When using a language where the behavior of a piece of code can only depend upon other code that has already been executed, an interpreter need not examine parts of the source text that are never actually executed.
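
To make item 2 concrete, here is a minimal sketch (hypothetical `Value` type, not taken from any real engine) of the runtime dispatch an interpreter must perform for `x+y` in a dynamic language:

```cpp
// Sketch of interpreter-style dispatch for a dynamic '+' operator.
#include <iostream>
#include <string>
#include <variant>

using Value = std::variant<double, std::string>;

Value add(const Value& x, const Value& y) {
    // Arithmetic addition only if both operands are numbers at runtime...
    if (std::holds_alternative<double>(x) && std::holds_alternative<double>(y))
        return std::get<double>(x) + std::get<double>(y);
    // ...otherwise fall back to string concatenation, as JavaScript does.
    auto str = [](const Value& v) {
        return std::holds_alternative<double>(v)
                   ? std::to_string(std::get<double>(v))
                   : std::get<std::string>(v);
    };
    return str(x) + str(y);
}

int main() {
    std::cout << std::get<double>(add(1.0, 2.0)) << '\n';                    // 3
    std::cout << std::get<std::string>(add(std::string("a"), 2.0)) << '\n';  // a2.000000
}
```

A compiler that cannot see the operand types must either emit this kind of type switch or call into a runtime that performs it; only when the types are fixed can it pick a single machine instruction for `+`.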

While it may be possible and even practical to design a C or C++ interpreter for use in situations where sandboxing was required, in all other regards a non-optimizing compiler would be faster and more efficient.

supercat
  • 8,335
  • 22
  • 28
  • Side note: `x+y` can fairly easily be compiled to mean different things by having the plus operation be a method on objects, that you can define to do whatever you want (essentially no different from `x.add(y)`). – NotThatGuy Jan 19 '23 at 17:00
  • @NotThatGuy: A compiler could generate code that treats everything as an object reference, and generate code for `x+y` that examines the types of the objects thus referred to and behaves appropriately, but such machine code would essentially be acting as an interpreter which examines the type information for `x` and `y` and selects a course of action based upon it. – supercat Jan 19 '23 at 17:05
  • 1
    Compiled code doesn't (necessarily) need to generate different code for `x+y`. It can just always call `x.add(y)`, and then `x` will define what that operation does. If that still falls under "essentially interpreting", then it seems so would most compilation. – NotThatGuy Jan 19 '23 at 17:33
  • 1
    @NotThatGuy: If a function's argument `x` is known to be a numeric primitive, then the most efficient machine code to handle the addition of `x+y` would be incapable of meaningfully handling `x.add(y)` if the function were passed anything other than the desired type, but would likely be more than an order of magnitude faster than polymorphic code could be. A Javascript just-in-time compiler may be able to achieve a compromise level of performance by producing three versions of the machine-code for a function--one which can only be invoked on things known to be a 32-bit integer, one... – supercat Jan 19 '23 at 18:02
  • ...that can only be invoked on things known to be numbers (though not necessarily 32-bit integers), and one that can be invoked on anything. If `x` is used 50 times in a function, this will allow 50 tests for whether `x` is an integer to be replaced with a single test when the function is entered, plus some tests on arithmetic operations to detect if `x` goes outside the range of integers, and switch to using the any-floating-point-number function if so. – supercat Jan 19 '23 at 18:04
  • 1
    Yes, that's why I said "necessarily". – NotThatGuy Jan 19 '23 at 18:07
  • Using a JIT x+y is often translated into “if x.type is integer and y.type is integer and x+y doesn’t overflow then result is integer (x+y) else x.add(y).” Or “… else recompile the code”. – gnasher729 Jan 21 '23 at 13:51
  • @gnasher729: What's significant is that JITs are able to optimize aggressively based on things that are "probably" true, because they have the ability to recover should any assumptions prove faulty. Unfortunately, static compiler development is pushing for abstraction models that employ comparably aggressive assumptions, but without any fallback strategy beyond blaming the programmer. – supercat Jan 21 '23 at 17:39
-1

Languages that are normally interpreted or just-in-time compiled have access to metadata that is often lost once compilation is finished. For instance, it may still be possible to access the fields of a structure by string name, iterate over them, or get information about their actual type. It may be possible to find and call a function by its string name, or read an annotation placed on it. Interpreted and just-in-time compiled languages quite often have lots of reflection features that are part of the standard and, even if not very common in user code, are heavily used in the libraries. I cannot really imagine how something like Hibernate could be implemented for a language like C++. Completely different world.
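
A small sketch of the gap (hand-rolled, hypothetical names): in C++, anything like "find a field by its string name" has to be written or generated by hand, because the compiler discards identifiers once compilation is finished.

```cpp
// Emulating by hand what reflective runtimes provide for free.
#include <iostream>
#include <map>
#include <string>

struct User {
    std::string name;
    int         age;
};

// The name-to-member table must be maintained manually (or by a code
// generator); the compiled binary retains no field names on its own.
const std::map<std::string, int User::*> user_int_fields = {
    {"age", &User::age},
};

int main() {
    User u{"Ada", 36};
    std::cout << u.*user_int_fields.at("age") << '\n';   // prints 36
}
```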

h22
  • 905
  • 1
  • 5
  • 15
-1

That depends on the language design and features. Lisp had an interpreter first, around 1960; in 1962 the first incremental machine-code compilation was implemented for it. Incremental means that the compiler can compile any function, small or large, and load the machine code into the running Lisp system, and source-level interpreted code can be freely mixed with machine-compiled code. That is to this date the dominant way to use Lisp for application development: the code gets mostly compiled, both interactively and as files.

So there are a bunch of strategies to get fast development times with Lisp:

  • use a source-level interpreter -> this requires no compilation, but compiled code can be called transparently. The drawback is usually slow runtime performance.

  • use an incremental compiler -> this compiles small units of code (expressions, functions, ...). The compiled code is then immediately usable in the running program, even though it is often machine code. The incremental compiler can also be used to immediately compile the code which gets entered into a Read-Eval-Print Loop (-> REPL); the incrementally compiled code then gets evaluated. Compiled code and interpreted code can be freely mixed. If one uses an incremental compiler, one gets the advantage of both fast code and fast interactive development.

  • use a file or block compiler -> this compiles single files or blocks of files. The compiled code is written to disk, but can be loaded into a running program. Typically one might either compile the code for debugging or for optimizations to improve runtime speed.

  • use an image-based system -> whole memory dumps of running programs can be saved and restarted. The images contain all code (compiled and interpreted), development information and runtime data. Typically one interacts with such a running system via a REPL. Compiling code and using the compiled code is immediate. Restarting the development environment is fast, since all information is already saved with the image.

For delivery of applications:

  • use a whole-program compiler -> this compiles a whole program to a binary. This is usually only done for delivery of applications, not during development.

  • use a treeshaker (and similar delivery tools), which create optimized binaries without development information.

Rainer Joswig
  • 2,190
  • 11
  • 17