

Ask YC: An old topic, compiled vs interpreted? - PieSquared

Okay, before I even say anything, I am pre-emptively sorry, because I KNOW that this has been discussed, rediscussed, analyzed, reanalyzed, and in general talked about to death all over the internet.

Essentially, my question is: Is there any advantage to having a language be interpreted?

I don't mean the current interpreted languages vs the current compiled ones. What I mean is, all else being equal, how important is the mode of execution?

Another question: when is reflection really useful? (I'm asking this because I think it would be rather difficult to create a compiled language which would support reflection, although it'd be easy for an interpreted language.)

Thanks
======
SirWart
Compiling means translating a program from one language into another. The
traditional meaning is compiling a high-level language into assembly, which is
then assembled into machine opcodes. That's not the only way to do it, though:
Python is first compiled into a bytecode (a more compact representation of the
source code, checked for syntax errors), and that bytecode is interpreted.
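As a sketch of that first step in CPython, the standard `dis` module will show the bytecode a function body was compiled to (the exact opcode names vary by Python version, so none are assumed here):

```python
import dis

def add_one(x):
    return x + 1

# CPython compiled the function body to bytecode at definition time;
# the code object carries the raw opcodes and the constants pool.
code = add_one.__code__
print(1 in code.co_consts)    # the literal 1 was compiled in
print(len(code.co_code) > 0)  # there are real opcodes to interpret

# dis renders that bytecode in human-readable form.
dis.dis(add_one)
```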

Interpreters are easier to write, because supporting different architectures
is much less work than it is for a compiler. Also, interpreting a small amount
of code that will only be run once is typically faster than compiling,
optimizing, and then executing it.

Compiling, however, has much greater potential speedups. A compiler can make a
decision once, at compile time, and that decision never has to be made again.
In an interpreted language, that decision might have to be remade on every
iteration of a loop.

As you move up to languages more powerful than C, they get harder and harder
to compile, because intrinsically less is known at compile time. In C, for
example, every variable is known at compile time, so every variable lookup can
be translated into a read at some offset from the stack frame pointer, which
is very fast. The C compiler keeps a hash table at compile time to track these
offsets, but at run time that information is no longer needed. In Python, you
don't know at compile time all the variables that are going to be looked up,
so you have to do those expensive hash table lookups at run time.
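CPython actually illustrates both cases in one language: locals are known at compile time and read by array index, while globals go through a name lookup. A minimal sketch using the `dis` module:

```python
import dis

GLOBAL_X = 1

def use_local():
    local_x = 1
    return local_x   # local slot known at compile time: LOAD_FAST

def use_global():
    return GLOBAL_X  # name resolved at run time: LOAD_GLOBAL

# The compiler knows every local, so locals live in a fixed-size array
# and are read by index; globals need a dictionary lookup by name.
ops_local = [ins.opname for ins in dis.get_instructions(use_local)]
ops_global = [ins.opname for ins in dis.get_instructions(use_global)]
print("LOAD_FAST" in ops_local)     # True
print("LOAD_GLOBAL" in ops_global)  # True
```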

In the end, naively generating code for a dynamic and reflective language
won't yield much of a speedup without some serious optimization passes. That
typically wasn't worth the effort, because the higher-level languages were
used as glue code around high-performance libraries or programs. As people
become more interested in using these languages for more intensive work,
expect to see more effort put into compiling them, although most will probably
be compiled into a virtual machine bytecode and then compiled to native
machine code by the JVM or CLR.

~~~
anamax
[http://steve-yegge.blogspot.com/2008/05/dynamic-languages-strike-back.html](http://steve-yegge.blogspot.com/2008/05/dynamic-languages-strike-back.html)

With the exception of some lisp work done a long time ago, no one has
seriously worked on making dynamic languages fast.

~~~
johnm
Huh? You need to get out more.

Also, note that the majority of the optimizations that e.g., HotSpot does are
as applicable to "dynamic" languages as they are to "static" languages.

------
jksmith
Yegge has just addressed this, and I think he has a beautiful point: Compiled
apps are dead, unfeeling, unable to adjust. Compiled apps are like the ones
you buy in a box at the computer store and install on the piece of plastic in
front of you. They are completely unable to interact with the user (or another
computer, or whatever) beyond their specific implementation at the point of
compilation. They're just the paper tape with a starting point and an ending
point.

OTOH, dynamic languages are designed to interpret conditions as they occur at
runtime. Up to this point, we really haven't been able to exploit this
powerful capability to its fullest extent, but hardware is now cheap, and
massive collections of hardware (like at Google's The Dalles plant) now exist
to provide what is essentially a virtual environment that provides units of
computing power as needed. Dynamic apps can adapt to take advantage of this.
For example, modify an app in the REPL as it's running, or hot swap code in an
Erlang app, or send lambdas wherever you want on a network running q/kdb+.
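The "modify an app as it's running" idea can be sketched in a few lines of Python (a toy stand-in for a real REPL or Erlang-style hot swap; `handler` and `serve` are hypothetical names):

```python
# Because the running code resolves the global name `handler` at call
# time, rebinding that name changes behaviour without a restart.

def handler(request):
    return "v1: " + request

def serve(request):
    # Late binding: `handler` is looked up on every call.
    return handler(request)

print(serve("ping"))     # v1: ping

def handler(request):    # "hot swap": rebind the name while running
    return "v2: " + request

print(serve("ping"))     # v2: ping
```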

Now take that notion a step further. Why do humans have to initiate any of
that day-to-day dynamism? The answer to that question is the future of
computing and it's why we have to have interpreted languages for dynamic
environments. Compiled apps (and OSes, IMO) are not part of the future. The
next frontier after that will involve the hardware itself going dynamic.

~~~
jrockway
Uh, what? All of the major dynamic languages are compiled, not interpreted.
"eval" != "interpreter".

------
mnemonicsloth
It sounds like you're asking these questions because you're stuck inside the
scripting language box. There's a much greater range of choice open to you
than Perl/Python/Ruby on the one hand, and C/C++/maybe-Java on the other.

For example, Common Lisp implementations often compile functions as they're
entered at the interactive prompt. From your perspective, you see an
"interpreter" that runs code on the order of 100 times faster than it should.
I'm told popular OCaml and Haskell implementations work this way too.

If you're interested in portability, there are efficient Scheme compilers that
target ANSI C, which gives you portability, good integration into just about
any environment, and access to lots and lots of library code. My favorite of
these is Chicken Scheme, which you can read about at
<http://www.call-with-current-continuation.org>.

~~~
PieSquared
So... it's compiling it, but acting as an interpreter. So even the macro-
evaluation step is compiled?

I had thought you'd need a Lisp interpreter (so that the macros could be
interpreted) and, optionally, a compiler. So the 'interpreter' can just be a
compiler followed by a call to the generated code?
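That compile-then-call shape can be sketched in Python, whose built-in `compile` and `exec` make the step explicit (`repl_step` is a hypothetical helper, not a real REPL):

```python
# An "interpreter" loop that is really compile-then-call: each input
# is compiled to a code object, then executed immediately.
env = {}

def repl_step(src, env):
    code = compile(src, "<repl>", "exec")  # compile this one input
    exec(code, env)                        # then jump into the result

repl_step("x = 21", env)
repl_step("y = x * 2", env)
print(env["y"])   # 42
```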

------
eru
Doesn't Common Lisp support reflection quite well normally, even when
compiled?

~~~
Hexstream
Yes, but CL might not be the easiest language to implement, either.

~~~
mark-t
That's strange. I seem to recall watching Sussman write a Scheme interpreter
in about an hour.

~~~
Hexstream
Sussman is not your average guy; Scheme is overall simpler than CL; and
implementing a compiler is harder than implementing an interpreter.
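To see why a toy interpreter is quick to write, here is a minimal sketch in Python: a recursive `evaluate` over nested `(op, lhs, rhs)` tuples, with no code generation at all (the tuple encoding and function names are made up for illustration):

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr, env):
    if isinstance(expr, (int, float)):
        return expr                      # literal
    if isinstance(expr, str):
        return env[expr]                 # variable reference
    op, lhs, rhs = expr                  # e.g. ("+", lhs, rhs)
    return OPS[op](evaluate(lhs, env), evaluate(rhs, env))

# (x * 3) + 4 with x = 2
print(evaluate(("+", ("*", "x", 3), 4), {"x": 2}))   # 10
```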

------
pmjordan
Interpretation of some kind (including compiling to bytecode and
interpreting/JITing that) offers a chance that the code can be verified for
safety/"correctness", something that's practically impossible for native code
compiled for real, mainstream instruction sets. That's not to say it's
actually impossible: just currently infeasible.

Other than that, I can't think of much else: as others have said, it's no
longer a discrete choice, it's become a continuum of choices, with hybrid
solutions that come very close to the ideals of both extremes. Except maybe
for the simplicity of an interpreter. :)

Reflection is useful whenever you need any kind of dynamic features in your
system. Even for rigidly static languages, compile-time reflection can be
useful ("metaprogramming"), although there's a serious danger of constructing
cathedrals. As it has been said before, compiled code doesn't exclude
reflection. Reflection is just metadata that gets compiled in, plus a
mechanism for accessing it.
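Python shows the "metadata plus an accessor" view of reflection concretely: class and function objects carry compiled-in metadata that code can query at run time (the `Greeter` class is a made-up example):

```python
import inspect

class Greeter:
    """Says hello."""
    def greet(self, name: str) -> str:
        return "hello, " + name

g = Greeter()
# Discover and invoke a method by name at run time.
method = getattr(g, "greet")
print(method("world"))                   # hello, world
# Inspect the metadata that was recorded at compile time.
print(inspect.signature(Greeter.greet))  # (self, name: str) -> str
print(Greeter.__doc__)                   # Says hello.
```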

~~~
johnm
Re: verification

Your comment is only arguably true for completely statically compiled
languages which have a very weak (or no) processing model that can be enforced
(by, e.g., the loaders).

------
eru
Compiled vs Interpreted is more like a continuum in reality than a binary
choice.

In theory there is no reason for one model to be less powerful than the other.
And as compiler/interpreter-writers get to be more knowledgeable this seems to
hold up in the real world more and more.

Though interpreters are generally easier to write.

------
johnm
The quick-and-dirty approaches are to write a simple interpreter, or a simple
source-to-source compiler targeting another language which has the tools,
etc., to handle the heavy lifting of the backend. For example, look at all of
the languages built on top of the JVM.

In terms of the hardcore, there's still a lingering fight between the static
compilers and the dynamic compilers. Though, even that is starting to erode
because the best static compilers only generate the best code by using
profiles created during instrumented, previous runs of the application (which
basically makes them poor-man's dynamic compilers). Dynamic compilers such as
HotSpot and the best Lisp implementations generate pretty good code and yet
still handle the realities of eval, dynamic-class-loading, etc. well.

------
rw
From the OP: "Okay, before I even say anything, I am pre-emptively sorry,
because I KNOW that this has been discussed, rediscussed, analyzed,
reanalyzed, and in general talked about to death all over the internet."

Including YC:
<http://www.google.com/search?q=(compiled+AND+interpreted)+AND+(speed+OR+ease+OR+portability)+site%3Anews.ycombinator.com>

------
greyman
Your question is hard to answer, since in real life, "all else IS NOT
equal". In my life as a programmer, about 90% of my work has been compiled, so
I certainly have a bias. My personal perception was that when working with
compiled languages (mostly C++), I felt like I was building something. With
interpreted languages (mostly Perl), I felt more like I was hacking something.

------
volida
Vague question.

It depends on the problem being solved, and the trade-off between performance
and RAD.

------
newt0311
Both. Best is a language which is initially interpreted and then gets
compiled just in time, with the JVM as a prime example.

