
I read in the Wikipedia article on the Common Language Runtime that one of the benefits the runtime provides is "Performance improvements".

Executing managed code (or bytecode) must surely always be slower than executing native code, because of the additional overhead of JIT compilation. How, then, is it possible that the CLR causes "Performance improvements"?

Update:

I have looked at the question and answers to What backs up the claim that C++ can be faster than a JVM or CLR with JIT?, but it has not been helpful, as that question asks why C++ would be faster rather than slower. What I am interested in is how it is possible, from an architectural point of view, for managed code to lead to performance improvements.

neelsg
  • It uses [just-in-time compilation](http://en.wikipedia.org/wiki/Just-in-time_compilation) techniques, as that Wikipedia article says. – Basile Starynkevitch Sep 10 '14 at 07:44
  • Oh boy, what a poorly written article. This would be so much easier to support or debunk (or even decide which to pursue) if it said *relative to what* there are performance improvements. – Sep 10 '14 at 07:45
  • @delnan: Then please edit the Wikipedia page. – Basile Starynkevitch Sep 10 '14 at 07:45
  • Your understanding of how a virtual machine works is wrong. – SK-logic Sep 10 '14 at 15:42
  • @SK-logic Would you care to elaborate? – neelsg Sep 10 '14 at 16:43
  • @gnat I don't think this is a duplicate. I did not compare it to C++. I just wanted to understand what "performance improvements" the Wikipedia article is referring to. I am not trying to determine which is better; I am trying to understand the CLR architecture. – neelsg Sep 10 '14 at 16:47
  • @neelsg, a VM does not necessarily interpret anything, and managed code need not "surely be slower". A JIT has a number of potential performance benefits over statically compiled code, thanks to the runtime profiling and tracing information available to it. – SK-logic Sep 10 '14 at 16:47
  • http://meta.stackexchange.com/a/194495/165773 – gnat Sep 10 '14 at 17:36

2 Answers


A CLR implementation does not necessarily interpret anything. In fact, the desktop CLR by Microsoft doesn't even have an interpreter; it always JIT compiles everything. So while CIL bytecode is read from disk and kept in memory, it is compiled to proper, fully fledged machine code, which is then executed. Thus the only slowdowns that can be attributed to the execution model (as opposed to the object model, memory management, CLR services, libraries, etc.) are:

  • Higher launch latency, i.e. the period between starting an application and all of its functionality being available. This step can even be skipped via NGen. (A rough sketch of measuring this warm-up cost follows the list.)
  • Missed compiler optimizations, caused by the desire to keep the aforementioned latency reasonable.
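To make the first point concrete, here is a minimal warm-up measurement sketch. It uses Java and the JVM purely as a stand-in for any JIT-based runtime such as the CLR, and the method and iteration counts are invented for illustration: the first call pays the interpretation/compilation cost, while later calls run already-compiled machine code.

```java
public class WarmupDemo {
    // A volatile sink keeps the JIT from optimizing the computation away.
    static volatile long sink;

    // A small but non-trivial method for the runtime to compile.
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += (long) i * i;
        }
        return total;
    }

    public static void main(String[] args) {
        // Cold call: may be interpreted, or pay the JIT compilation cost.
        long t0 = System.nanoTime();
        sink = sumOfSquares(100_000);
        long cold = System.nanoTime() - t0;

        // Warm the method up so the JIT promotes it to optimized machine code.
        for (int i = 0; i < 10_000; i++) {
            sink = sumOfSquares(100_000);
        }

        // Hot call: now runs fully compiled, optimized code.
        long t1 = System.nanoTime();
        sink = sumOfSquares(100_000);
        long hot = System.nanoTime() - t1;

        // For serious measurements, use a benchmark harness such as JMH.
        System.out.printf("cold: %d ns, hot: %d ns%n", cold, hot);
    }
}
```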
  • I don't mean to ask stupid questions, but this does not seem to answer why it would **improve** performance, only why it may not degrade performance. – neelsg Sep 10 '14 at 16:42
  • @neelsg You're right, it doesn't, because (as I wrote in a comment earlier) I don't even know compared to *what* it's supposed to be faster. So instead I focused on the misconception in your question (that it would be slower). – Sep 10 '14 at 16:52

Improve performance compared to what exactly?

  • Compared to most compiled executables, a JIT compiler has the advantage of knowing exactly what environment it is running in. It can use the latest instruction set extensions, apply optimizations that work only on that specific CPU, and exploit runtime profiling information that a static compiler never sees (see the first sketch after this list).
  • Compared to manual memory management that isn't optimized very carefully, a modern generational garbage collector is very efficient, especially if you have lots of very short-lived objects; allocation is basically just a pointer bump (see the second sketch after this list).
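As an illustration of the first point, here is a minimal, hypothetical sketch, with Java and the JVM again standing in for any JIT-managed runtime and the types invented for the example. When a virtual call site only ever sees a single implementation at run time, a profiling JIT can devirtualize and inline the call, while a static compiler must generally emit an indirect dispatch because any implementation might be linked in later:

```java
// Hypothetical example: an interface with, at run time, only one loaded
// implementation. A profiling JIT can observe that the call site below is
// monomorphic and replace the virtual dispatch with a direct, inlined call.
interface Shape {
    double area();
}

final class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

public class DevirtualizationDemo {
    public static void main(String[] args) {
        Shape[] shapes = new Shape[1_000_000];
        for (int i = 0; i < shapes.length; i++) {
            shapes[i] = new Circle(i % 10);
        }
        double total = 0;
        // Only Circle ever reaches this call site, so the JIT can inline
        // Circle.area() here instead of dispatching through a vtable.
        for (Shape s : shapes) {
            total += s.area();
        }
        System.out.println(total);
    }
}
```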
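And for the second point, a sketch of the allocation pattern that generational collectors handle well: large numbers of objects that die almost immediately, so a young-generation collection only has to copy the few survivors (again a hypothetical Java example):

```java
public class ShortLivedDemo {
    static final class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
        double dist() { return Math.sqrt(x * x + y * y); }
    }

    public static void main(String[] args) {
        double total = 0;
        for (int i = 0; i < 10_000_000; i++) {
            // Each Point is allocated with a pointer bump in the young
            // generation and dies before the next collection, so the GC
            // never has to copy it. (A JIT may even remove the allocation
            // entirely via escape analysis.)
            Point p = new Point(i, i + 1);
            total += p.dist();
        }
        System.out.println(total);
    }
}
```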
Michael Borgwardt
  • Though in all fairness, modern GCs make a trade-off to achieve that performance: they collect less eagerly, so the total memory footprint is usually higher. Also, efficiency has two sides, throughput and latency, and most designs heavily optimize for one at the expense of the other (e.g. stopping the world to improve throughput, or guaranteeing maximum pause times at the cost of abysmal throughput). – Sep 11 '14 at 10:52