But considering that optimized scalar code performance has moved, what, maybe 40% over the last two decades, I'm going to say "not much". Compilers are sexy, but they're very much a solved problem. If we were all forced to get by with the optimized performance we saw from GCC 2.7.2, I think we'd all survive. Most of us wouldn't even notice the change.
But in the context of JIT compilers for dynamically typed languages, in particular the space involving runtime inferred hidden type models, there is a TON of work left on the table.
It hasn't received much attention in academia, IMHO largely because of a historical perspective among language theorists that optimizing dynamic languages is "not classy" work. I hope that perspective changes over time.
Not for all of the other widely-used languages that still have incredibly simple interpreters. Think how much energy could have been saved if Ruby, Python, and PHP were all as fast as your average JS engine.
I would think we'd see even better improvements if developers would move towards statically typed languages as well.
Here's an old slidedeck that talks about it: http://www.oracle.com/technetwork/java/jvmls2013wimmer-20140...
My god, I can't believe it was 3 years ago that I read about it on HN.
That's the currency I deal in.
In fact, looking at the market as it stands right now I'd say that performance concerns are almost entirely uncorrelated with programming environment adoption.
I actually think that the reason why JS is so odd is that it is so expressive. That tends to happen with kitchen sink languages like C++.
Relevant: "There are only two kinds of programming languages: those people always bitch about and those nobody uses."
And you're gonna apply that to the darkest corners of ES6? Whee, language theory nerd paradise!
I can't wait for the compilers targeting that using all sorts of really odd idioms that in effect work like fuzz testing.
If you still think it is "expressive", you've never seen an expressive language.
Are you aware who Filip Pizlo is?
And, no, JS is not expressive. It has strict limits on the available level of abstraction (unless you go into eval, of course).
JS perf is important to users though. Faster execution means fewer watts spent rendering and interacting with your favorite web page.
(Fun fact: B3's backend contains a machine description language that gets compiled to C++ code by a ruby script, opcode_generator.rb. We use Ruby a lot.)
So, naturally, we want to avoid building JSC twice. ;-)
EDIT: I wonder why pointing out the positive effect of re-discovering that not all compilers need to have C and C++ compile speeds, and that fast compilation was once upon a time mainstream, is worthy of downvotes.
Unfortunately, they didn't learn about the stuff between Oberon and 2007 that would've been nice to have in a modern, app language. ;)
The remark is tailored to those who think compiled languages can only compile as slowly as C and C++ compilers do, since they've never used anything else, and then jump for joy when they use Go.
Yet if it wasn't for the VM detour of the last 20 years, that experience would probably be a current one, instead of being re-discovered.
"Yet if it wasn't for the VM detour of the last 20 years"
You mean the C++ and VM detours? ;)
No. I'm claiming it doesn't perform anywhere near the level of optimization of GCC and LLVM.
> I'd think Java can certainly be considered having highly optimized compiler / runtime. But it has almost same performance as Go and much higher memory usage compared to Go.
I disagree with what seems to be your implication that compiler optimizations don't matter (and almost everyone else who works on compilers would also disagree), but I don't really want to turn this thread into a critique of the benchmarks game, so let's just leave it at that.
Please don't jump to the conclusion that huge differences in the default memory allocation of tiny 100-line programs mean there will be similarly huge differences between ordinary large programs.
Notice that even for those tiny 100-line programs, the difference can be more like 2x when memory actually needs to be allocated for the task.
Compilers are sexy, but they're very much a solved problem.
For sequential languages the problem has shifted: how can I get a performant compiler easily? The most interesting answer to this question is PyPy's meta-tracing, and that work is from 2009 and far from played out.
A 40% decrease in optimization is enough to drop framerates from 60fps to 30fps easily, so I'm pretty sure we would notice it.
I'm not convinced. Raw single-thread number crunching performance is somewhere around _two to three fold_, clock-for-clock, on Intel x86, over that of 10-15 years ago. What methodology do you use to attribute only a fraction of those gains to language optimizers? And even if you are correct, why is it meaningful? Who is going to have invested energy in optimising the shit out of mundane codegen when hardware performance will have just come and stolen your thunder a few months later?
The problem we have now is that CPUs are gaining ever more complex behaviour, peculiarities, and sensitivities. I'd say compiler engineering is far from a "solved problem", even for statically-typed languages.
With mainstream CPUs, exactly the opposite is happening. CPUs are getting more complex under the hood, but less sensitive to code quality. For example, a lot of the scheduling hazards in the P6 microarchitecture have been eliminated in subsequent iterations. Branch delay slots are a thing of the distant past, so are pipeline bubbles for taken branches, indirect branch prediction is extremely capable, even the penalty on unaligned accesses is minimal.
No they're not, and won't be for long (ever?). However it does not matter because they are "good enough".
Compilers are driven by heuristics which provide "reasonable" results in most cases for common architectures. But they still leave a lot on the table. Compiler writers have to trade compile-time with execution-time.
Now we're not talking about an order of magnitude, but rather ~20%-30% in some workloads. When it matters (I guess for people like Google/Facebook/Amazon/... it translates into the electricity bill and the number of racks to add to the datacenter) people may have to get down to the assembly level for a very small (and hot) part of the program.
> And only the web world is so obsessed with smart compilers compensating (impressively, but still far from being sufficient) for multiple deficiencies in the language design.
You have no idea how much compilers have to compensate for the deficiencies in C and C++'s design.
And no, thank you kind sir, but I've got a very good idea of what compilers are doing wrt. C deficiencies; I was writing OpenCL compilers for at least 6 years. Besides aliasing stupidity and byte-addressing there is nothing really bad to compensate for.
asm.js and Web Assembly.
> Besides aliasing stupidity and byte-addressing there is nothing really bad to compensate for.
Aliasing issues, C++'s heavy reliance on virtual methods, too many levels of indirection in the STL, overuse of signed integers (because "int" is easier to type) interfering with loop analysis, const being useless for optimization, slow parsing of header files...
For example in JS, once you prove that "o.f = v" is not going to hit a setter (or you put a speculative check to that effect), then you know that this effect does not interfere with "v = o.g" (provided you check that it's not a getter). JS VMs are really good at speculating about getters and setters. The result is that alias analysis, and its clients like common subexpression elimination, are super effective. It's only a matter of time before we're creating function summaries that list all of the abstract heaps that a procedure can clobber so even a function call has bounded interference.
This is totally different from C. There, making sure that accesses to o->f and o->g don't interfere is like pulling teeth. And the user can force you to assume that they always interfere by using -fno-strict-aliasing, which is hugely popular.
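To make it concrete, here's a rough TypeScript-flavoured sketch of the kind of code where that JS-side speculation pays off (a hypothetical example, not taken from any real engine):

    // Once the VM has speculated (or proven) that o.f and o.g are plain data
    // properties - no setters or getters - the store to o.f cannot interfere
    // with loads of o.g, so CSE can reuse the first load instead of emitting
    // a second one.
    function example(o: { f: number; g: number }, v: number): number {
      const a = o.g;   // first load of o.g
      o.f = v;         // store; speculated not to run a setter
      const b = o.g;   // redundant load: the optimizer can reuse `a` here
      return a + b;
    }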
The fallback-to-C paths on the web aren't that great, though. At least not yet. Once you fall back to C, you're sandboxed into a linear heap with limited access to the DOM and JS heap. But we'll fix that eventually. :-)
As for virtual methods, it is a problem of bad C++, devirtualisation may help, but nobody cares in general. We have a cool curiously recurring template pattern instead.
But for the integers you're right. And signedness is still the most annoying source of bugs in the infamous LLVM instcombine.
Headers are a frontend issue, nothing to do with the optimisations.
> As for virtual methods, it is a problem of bad C++, devirtualisation may help, but nobody cares in general.
Huh? Tons of people care about devirtualization! Much of the reason Firefox builds go to the trouble of PGO (and it is a huge amount of trouble) is to get devirtualization.
> As for virtual methods, it is a problem of bad C++
As for C++ - there is a choice. With JS the choice is much more limited. You either rely on non-standard asm.js behaviour, lose DOM access with Web Assembly, or tolerate the stupidity and limitations of a language that has far outgrown its tiny niche.
Btw, I have yet to hear how a language with a fixed syntax and no macros can be perceived as "highly expressive".
Also, what do you mean by levels of indirection in the STL? Pointer chasing or, I suspect, layers upon layers of tiny functions that need to be inlined for reasonable performance?
Which ones are these? All of the statically-typed and well-designed languages I can think of are, at the least, hard to compile well, if not hard to compile in the first place. (Haskell and Rust both come to mind; there are few Haskell compilers other than GHC, and no Rust compilers other than rustc.) The languages that are easier to compile are either not statically-typed in a useful way, or not well-designed, or both.
Edit: I was really referring to language families, so including things like OCaml, Modula, etc.
The frontend is usually the hard part IMO. In WebKit, we have 100,000 lines of code for our "frontend" (i.e. the DFG compiler) and 50,000 lines of code for our "backend" (i.e. B3). That split is really interesting.
As far as Ada goes, I figured it would be difficult due to the large number of language features and the complexities of analyzing a program for safety w/ all its rules, and any interactions that might emerge between those features and rules. Could be easier than I think, but my default belief is that it's not easy.
The frontend was awful and I never finished it, but the LLVM-style backend was very easy to build.
Is that really the case? It seems to me that a Prolog-like resolution system, coupled with a constraint solver, would get most of the job done with little effort.
There are certainly many rules to keep track of, but in some cases the newer rules are strict generalisations of the older rules. For example, Haskell's original type classes are just a special-case of modern multi-parameter, flexible instances/contexts, etc. type classes. Likewise, we can implement constraint kinds, type class instance resolution and type equalities using one constraint solver.
My usual approach to implementing a type system is to derive a flat list of Prolog equations out of the AST and leave the solving to Prolog. If you ask what to do with error messages, I've got a comprehensive answer, but it's not something I can type out on a mobile phone.
My approach to typing error reporting is quite compatible with the Prolog codegen. What I do: for each AST node which produces one or more type equations, I record the location data (cache it and give it an index), and then issue a Prolog term like with_location(Id, Term) for each of the terms generated by this location.
The with_location semantics are simple, but may depend on the presence of backtracking. If it's plain Hindley-Milner, there is no backtracking, and it simply does call/1(Term), continues if true, and if false stores Id and the failed term with all the current bindings and then calls fail/0.
If there is backtracking, I store a stack of location ids and failure reasons instead, but this required patching the Prolog execution engine a little bit (luckily, I'm using my own Prolog for this).
Now, the next step is to decipher the failed term. Most of the equations are in a trivial form: equals(A, B), so I simply report that prolog_to_type(A) failed to unify with prolog_to_type(B). For the other (rare) kinds of equations I define custom pretty-printing rules.
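FWIW, here's a minimal sketch of that scheme in TypeScript rather than Prolog (all the names here - TypeTerm, Constraint, solve - are made up for illustration, not from any real compiler): every equation carries the location id of the AST node that produced it, and the first unification failure is reported as that location plus the two types that refused to unify.

    type TypeTerm =
      | { kind: "var"; name: string }
      | { kind: "con"; name: string; args: TypeTerm[] };  // e.g. int, list(a), fn(a, b)

    // The analogue of with_location(Id, equals(A, B)).
    interface Constraint { loc: number; a: TypeTerm; b: TypeTerm }

    type Subst = Map<string, TypeTerm>;

    function resolve(t: TypeTerm, s: Subst): TypeTerm {
      while (t.kind === "var" && s.has(t.name)) t = s.get(t.name)!;
      return t;
    }

    // Plain Hindley-Milner style unification: no backtracking, so the first
    // failure is final. (Occurs check omitted for brevity.)
    function unify(a: TypeTerm, b: TypeTerm, s: Subst): boolean {
      a = resolve(a, s); b = resolve(b, s);
      if (a.kind === "var" && b.kind === "var" && a.name === b.name) return true;
      if (a.kind === "var") { s.set(a.name, b); return true; }
      if (b.kind === "var") { s.set(b.name, a); return true; }
      if (a.name !== b.name || a.args.length !== b.args.length) return false;
      return a.args.every((x, i) => unify(x, b.args[i], s));
    }

    // Solve the flat list of equations; on failure, report the stored location
    // id and the two (partially resolved) types that failed to unify.
    function solve(cs: Constraint[]):
        { ok: true } | { ok: false; loc: number; a: TypeTerm; b: TypeTerm } {
      const s: Subst = new Map();
      for (const c of cs) {
        if (!unify(c.a, c.b, s)) {
          return { ok: false, loc: c.loc, a: resolve(c.a, s), b: resolve(c.b, s) };
        }
      }
      return { ok: true };
    }

So the equivalent of "prolog_to_type(A) failed to unify with prolog_to_type(B) at location Id" falls straight out of the failure record.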
To me this is the difference between knowing that your program will probably work after it compiles, vs not knowing until after it's deployed.
Also, many of the improvements done for JS will certainly trickle down to Python, Ruby, PHP eventually.
It's also encouraging to see them opening up about future directions rather than just popping full-blown features from the head of Zeus every so often. (Not that they owe us anything... ;-)
(Edit: it's also damned impressive for 2 people in 3-4 months.)
Awesome write up btw.
I wonder if this will make it in time for iOS 10.
But as soon as we realized that we had such a huge compile time opportunity, of course we optimized the heck out of the new compiler for the kinds of things that we always wished LLVM could do - like very lightweight patchpoints and some opcodes that are an obvious nod for what dynamic languages want (ChillDiv, ChillMod, CheckAdd, CheckSub, CheckMul, etc).
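For what it's worth, here's a rough sketch of the semantics the "chill" flavour buys you (an assumption-laden simplification, written over JS numbers here rather than the machine integers the compiler actually works on):

    // "Chill" division: never trap, just produce a defined result for the cases
    // where C integer division would be undefined behaviour - which is exactly
    // what a JS-style (x / y) | 0 wants from a single opcode.
    function chillDiv(a: number, b: number): number {
      if (b === 0) return 0;      // divide-by-zero yields 0 instead of faulting
      return (a / b) | 0;         // truncating 32-bit division; INT_MIN / -1 wraps
    }

    chillDiv(7, 2);               // 3
    chillDiv(7, 0);               // 0, no hardware exception
    chillDiv(-2147483648, -1);    // -2147483648 (wraps instead of trapping)

The Check* opcodes are presumably the other side of the same coin: arithmetic that bails out to slow-path code on overflow instead of silently wrapping.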
E.g. stuff like making the in-memory IR representation better cacheable certainly sounds like it's all-upside, and LLVM should just learn from your project.
Is this a knock on LLVM then?
I wonder then specifically if this brings to light any concerns over Swift (another dynamic language, and was created by the same person who created LLVM as well). 
Seems weird that the original creator of LLVM was able to make a dynamic language such as Swift - without any problems.
LLVM was, more-or-less, designed around C++ semantics. B3 is not going to be a better C++ compiler than LLVM. LLVM is not going to be a better JS compiler than B3. They're taking different evolutionary paths.
VxWorks is great at driving martian rovers, but would be horrible at running Adobe Photoshop or Maya.
Why does everything either need to be a snub or a boon?
LLVM is great at generating compiled binaries - it just takes resources (time + memory) to do so - in practical terms, the compilation time only affects the programmers and developers running multiple compiles over a work day, working on the codebase itself. For consumers who are running the programs, LLVM makes great optimized binaries.
In a JS engine, the compiler doesn't have access to the source code until a user loads up the page, so it's in essence doing "compilation" on the spot. And in this case compilation memory and time usage matters.
Swift itself is a "compiled" language - meaning that in development, a developer needs to hit that "compile" button in Xcode to generate a binary file which is then distributed and run. In this case, LLVM can use all the cores on that big honking Mac Pro or shiny MacBook Pro 15" retina all it wants - take all 16 CPUs and 32 gigs of RAM and crank it and make a nice binary that comes out optimized to run on an iPad Air.
Of course things like Java conflate the two styles (source code compiled to bytecode and then run on the JVM) - but in the Java ecosystem there are significant optimizations both in the compile phase (ie javac helloworld.java) as well as in just-in-time compilation in the JVM itself.
If you're working on a JIT, this matters. A lot. WebKit isn't the first project that has used or worked with LLVM in this manner and abandoned it. PyPy also investigated using LLVM and stopped doing that.
LLVM isn't bad, it just isn't good at solving this problem it wasn't designed for. If anything, I think it's remarkable that LLVM has worked out as well for WebKit as it did.
The conclusion is that it's mostly dead or worse, because of the performance hit when typechecked code calls untyped code.
(And also because of the details of Typed Racket, which perfectly reasonably translates into ordinary untyped Racket, but that is then mangled by GNU lightning (!).)
"At the same time, we are also challenged to explore how our evaluation method can be adapted to the world of micro-level gradual typing, where programmers can equip even the smallest expression with a type annotation and leave the surrounding context untouched. We conjecture that annotating complete functions or classes is an appropriate starting point for such an adaptation experiment."
They used to be called interpreted languages or "scripting" languages - but I think the vocabulary shifted so the word "dynamic" better encompasses what these languages are about.
As to whether a language is dynamically or statically typed - it's easy to tell: do you need to indicate that a variable is an int or a char or a string before you start using it? and do you have to cast the variable to different types when using functions that expect a certain type? If so, it's statically typed. Dynamically typed languages allow you to give a character "5" to a function that expects an integer (ie the number 5) and then automatically convert it, so for example the final result of 5+"5" is int(10). Or when you try to print it, it gives you the string literals "1" and "0" without you having to recast it yourself.
Obviously dynamic typing makes things easier for humans, but the computer has to keep predicting what the programmer is going to do with that variable, so it has some runtime and compile-time weaknesses in performance and memory usage, as well as less strictness in compile-time checking - which might allow certain types of errors to sneak by. Static typing, by contrast, is very direct: you have to instruct the computer to do every cast from one type to another, etc., and the statically-typed compiler is a sadist - it will fail all your code over and over again until you get all the types right. The difference is extremely noticeable even for a novice programmer.
The Swift documentation says that Swift is "type-safe" (the compiler is a sadist and will halt on all type errors) but has type inference, so you don't have to explicitly indicate the type of a variable. I would consider it to be heavily on the statically-typed side though - since it seems the language won't let an integer variable all of a sudden also be a string - you have to cast it properly first.
Not sure where the confusion comes from, probably from people putting buzz words to everything that is new regardless of applicability.
Either way, you are correct: Swift is not "dynamically-typed" and neither is it a "dynamic/interpreted" language.
Modern statically typed languages often don't require this because of type inference, so it's a bad test.
> and do you have to cast the variable to different types when using functions that expect a certain type?
Modern statically-typed languages may not require you to do this (e.g., Scala if the types involved have appropriate implicit conversions defined.)
> Dynamically typed languages allow you to give a character "5" to a function that expects an integer (ie the number 5) and then automatically convert it, so for example the final result of 5+"5" is int(10).
That's the canonical example of weak typing; many (maybe most) dynamically-typed languages do not do this type of conversion.
A better test for a dynamically-typed language is what kind of error is produced by sending a value of an unexpected type (where "expected" includes anything that can be handled by the applicable implicit conversion rules) to a function: if it is a compile-time error, the language is statically typed. If it is a runtime error, it is dynamically typed.
This really has nothing to do with the dynamic/static axis. Many definitely dynamically typed languages (Python, Lisp variants, etc.) do not allow this kind of conversion. This is just poor language design.
Often languages which allow this kind of implicit conversion are called weakly typed, but that's another very ill-defined term. I like "stringly" typed.
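A small TypeScript/JS sketch of both distinctions (hypothetical example):

    function expectsInt(n: number): number { return n + 5; }

    // Statically typed: the bad call is rejected before the program ever runs,
    // which is why it has to stay commented out for this to compile at all.
    // expectsInt("5");          // compile-time error: a string is not a number

    console.log(expectsInt(5));  // 10

    // Weak vs strong is a separate axis: plain JS happily coerces across types
    // at runtime, so 5 + "5" is the string "55" - not 10, and not an error.
    // A strongly but dynamically typed language (Python, most Lisps) would
    // instead raise a runtime type error for the same expression.
    console.log(5 + "5");        // "55"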
I think we don't have great tools for helping with this sort of optimization. One can use perf to find cache misses, but that doesn't necessarily tell the whole story, as you might blame some other piece of code for causing a miss. Maybe I should try cachegrind again...
This is why, when they compare it to v8/etc, it's kind of funny. They all have the same curve.
Basically all of these things, all of them, end up with roughly the same deficiencies once you cherry pick the low hanging fruit, and then they stall out, and get replaced a few years later when someone decides thing X can't do the job, and they need to write a new one. None of them ever get to a truly good state.
Rinse, wash, repeat.
The only way these things make real progress is by doing what LLVM did - someone works on them for years.
Let me quote a former colleague at IBM - "there is no secret silver bullet to really good compilers, it's just a lot of long hard work". If you keep resetting that hard work every couple years, that seems ... silly.
TL;DR: If you really believe they've totally gotten everywhere they need to be in 3 months, I've got a bridge to sell you.
For example, good loop vectorization and SLP vectorization are hard.
seriously though, dang, how many years of coding to get to that level of expertise
In addition to that, dedicated implementations can take various shortcuts to make their job easier - there are some nice examples given in the link. LuaJIT is example of a compiler project that benefits from being heavily specialised to a particular job, to remarkable effect.