Why the New V8 Is So Damn Fast (nodesource.com)
336 points by okket 6 months ago | 233 comments

I speculated recently that TypeScript might _de facto_ help V8 optimizations, and the post seems to confirm that. https://clipperhouse.com/does-typescript-make-for-more-perfo...

IMO the problem as it stands now is that when developing TypeScript you have a compiler running on your machine doing the type checking. It will error out and simply not compile when you try to pass a string to a function expecting a number.

What if there is a bug (or even a misconfiguration in your project)? What if TypeScript changes over time (and now there are different versions that all need to be supported)? What if the types from TypeScript (designed for programmers) are not always in line with V8's internal representation of objects? Or what if people push TypeScript output through V8 despite compile-time errors? (The compiler should have prevented this from happening, but you can pump whatever you want into V8, including things TypeScript considers broken.)

You only really get an improvement at runtime if your compiler guarantees something (and bakes it into whatever it is compiling). As of now there are few guarantees in the JavaScript language (which is what comes out of your TypeScript compiler). AFAIK TypeScript is designed to help programmers find common bugs.

Not saying it's not possible, but more work is needed to bridge the two.
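To illustrate the "no runtime guarantees" point: tsc erases annotations, so what V8 actually executes is plain JS. A minimal sketch (`add` is a hypothetical function, with its pre-erasure TS signature shown in a comment):

```javascript
// TS source:  function add(a: number, b: number): number { return a + b; }
// What tsc emits, with the types erased:
function add(a, b) { return a + b; }

console.log(add(1, 2));   // 3
console.log(add("1", 2)); // "12": nothing enforced the annotation at runtime,
                          // and V8's type feedback for `add` is now polymorphic
```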

I quite disagree. Look at JVM performance: there have been huge improvements over time, and a Java program compiled 20 years ago gets the same improvements as one compiled today.

Not only that but you can expect to get better performance from the JIT than AOT because the JIT can perform optimisations which are impossible for the compiler.

The compiler can also perform optimizations that are impossible for a JIT because they're too expensive. JITs have to balance execution speed with compilation time. It's not so clear cut which approach is theoretically superior. I also don't know of any language where a JIT and an AOT compiler have gotten equivalent amounts of engineering hours, so that we could have a fair comparison.

Graal can run in dynamic and AOT compilation modes. If you feed profiles from a test run into the AOT compile, you can get pretty close to dynamic compilation's peak performance.

And if you don't, the gap is about 20% IIRC.

> you can expect to get better performance from the JIT than AOT

And you are likely to be disappointed. I think these claims have been made for the last 20 years, and Graal might be the first to deliver something general - all previous successes were very limited to small niches.

I like JITs and what they offer, but very consistently for me and almost everyone I know, in practice AOT works better (faster, consistent, no "heuristic changed in JIT version and now it's slow" surprises).

> Not only that but you can expect to get better performance from the JIT than AOT because the JIT can perform optimisations which are impossible for the compiler.

I can’t help myself: the JVM performs on par with managed AOT languages (Go, Haskell, OCaml, etc.) and strictly slower than the unmanaged AOT languages (C, C++, Rust). The JVM does outperform various AOT Java compilers, but that’s probably an artifact of the decades of optimizations that went into the JVM; the same investment in an AOT compiler would close the gap to within the margin of error. Anyway, sorry for my compulsive nitpicking; hopefully it was interesting.

Lots of people believe this but it's false.

JVM vs C++ depends very much on the code shapes and what the code is doing. C++ that's very heavy on virtual function calls can be faster written as Java. On the other hand if you use a lot of SIMD intrinsics and things like that, C++ can be a lot faster.

W.R.T. AOT vs JIT, as others are pointing out, Graal is a compiler that can execute in both modes, i.e. it's a comparable compiler. JITC is about a 20% win for relatively static languages like Java and can be massively larger (like 4000%+) for other languages like JavaScript or Ruby. In fact the nature of very dynamic languages like Ruby or Python or JavaScript mean there's little point trying to AOT compile them at all because there'd be almost no optimisations you could safely do ahead of time.

You make it sound like Java is neck-and-neck with C++ in general. There are definitely cases where naive Java is faster than naive C++, but those cases are infrequent. Java is generally in the ballpark of 1/2 C++ speed alongside Go and C# in the general case.

> W.R.T. AOT vs JIT, as others are pointing out, Graal is a compiler that can execute in both modes, i.e. it's a comparable compiler. JITC is about a 20% win for relatively static languages like Java and can be massively larger (like 4000%+) for other languages like JavaScript or Ruby. In fact the nature of very dynamic languages like Ruby or Python or JavaScript mean there's little point trying to AOT compile them at all because there'd be almost no optimisations you could safely do ahead of time.

It seems like you mistook my point for "AOT makes things fast and should be used everywhere" or "JIT is fundamentally slower than AOT"; my only point was that the JVM specifically isn't faster than AOT languages nor is it faster than AOT Java because of JIT specifically (which was how I interpreted jahewson's comment above). It sounds like I agree with you--Graal seems really promising and JIT is pretty great, especially for dynamic languages. :)

Well, C++ is such a flexible language that it's hard to say what general C++ code looks like. I'd say I'd expect Java to either match or even beat C++ for general "business code" which is pretty vague but is typified by lots of layers of abstraction, lots of data transformation, hash maps, string manipulation etc. Modern compilers like Graal are very good at analyzing and removing abstraction overhead. I'd expect C++ to stomp Java for any sort of numerics work, media codecs, 3D engines, things like that.

> my only point was that the JVM specifically isn't faster than AOT languages nor is it faster than AOT Java because of JIT specifically

Hmm, but is there such a thing as an AOT language? You can interpret or JIT compile C and you can interpret, JIT or AOT compile Java too.

I think it's clearly the case that JITC Java is faster than AOT Java. It was maybe unclear for a long time but Graal and SubstrateVM let us compare now. You pay a big performance hit to use a VM-less AOT Java compile. Java is sort of a half-way house between a really dynamic language and a really static language.... it's got dynamic aspects and also static aspects.

> Hmm, but is there such a thing as an AOT language? You can interpret or JIT compile C and you can interpret, JIT or AOT compile Java too.

You're right that my language was imprecise, but my point stands. The JVM doesn't make Java faster than popular AOT implementations of (for example) Rust or C++ or C, and the JVM is almost certainly faster than AOT Java implementations because of decades of optimizations, not because JIT is inherently better than AOT.

> I think it's clearly the case that JITC Java is faster than AOT Java. It was maybe unclear for a long time but Graal and SubstrateVM let us compare now.

Graal is shaping up to be a real game changer, but it's unproven and it's exceptional among JIT implementations. If Graal is all that it's cracked up to be, I wouldn't be surprised if there is a time when all popular JIT implementations perform like Graal, but until such a time, JIT alone doesn't have any clear performance advantages over AOT in general.

> because the JIT can perform optimisations which are impossible for the compiler

Is there an example of at least one such optimization? As JIT is not magic, it also compiles things ahead of time, just a little bit ahead, and it is very constrained in how much time and resources it can spend. It also can't afford to slow down the interpreter too much, can't do excessive profiling, can't make big changes to the code that touch half of all the functions and data structures, etc. AOT can do all of that, and I can't think of anything a JIT does that AOT can't.

One such example is dynamic re-compilation in Java, with e.g. inlining or uninlining based on recent profiling. This would handle the case where a certain (large-ish) function is on a branch that's often taken for some stable amount of time (where inlining gets rid of method call overhead), then often not taken for other periods of time (where un-inlining gets rid of the instruction cache bloat).

Artificial example:

    while (true) {
        if (timeOfDay < NOON) {
            computePi(); // should be inlined before noon, a plain call after noon
        } else {
            computeE(); // inlined after noon, a plain call before
        }
    }

@JIT vs. AOT: I used to think that, too. But Graal came out and got 90% of the JIT speed right from the start, with a fraction of the time HotSpot had for optimization (and of course with better startup times).

Yes, it seems Graal might finally deliver on the "JIT is better because it is better informed" promise after 20 years. I'm keeping my fingers crossed, but I think we need a little more experience with it before victory can be declared.

Uh, a misunderstanding, my mistake. What I meant was that Graal's native-image AOT compiler with the Substrate VM has achieved 90% of HotSpot JIT execution speed pretty much since the project was launched. You may be right that Graal's JIT is still faster; I don't know about that.

What about luajit?

luajit is a great triumph for dynamic compilations and jits; and the fact that it compares favorably to C code (about 70-85% in some of my experiments) is utterly amazing. Especially given that it's essentially the work of one person.

And yet, regardless of how much typing info you give it, it loses out to good AOT compilers, for a similarly implemented algorithm.

GraalVM is unique in that it seems to do better than AOT in some cases. I'd wait a couple more years before declaring JIT victory, though.

The belief is that JITs have information, such as variable values and memory access patterns, that an AOT compiler does not. Well, so far hardly any JIT uses that info, and AOT compilers do get much of it through profile-guided feedback. Also, JITs have to amortize compilation effort against runtime savings, whereas AOT doesn't. So "JIT is obviously >= AOT" is not a foregone conclusion.

I very much agree with you!

> You only really get any improvement in runtime if your compiler guarantees anything (and bakes it into whatever it is compiling).

I meant this in reference to runtime speed optimisations coming from types. V8 is improving every day!

Wouldn't it be good if JavaScript also started supporting Python-like type annotations (completely optional, but unlike Python's, used to guide the optimizer)? Given that a non-trivial amount of JS is already generated after type checking (TypeScript, Flow, and all the statically typed languages compiling to JS), all this compile-time type info could be made available to the optimizer.

For speed purposes? This post is for example already talking about how fast JS is getting without types.

I'd say it would be a lot easier to simply focus on the WebAssembly format, you can write your code in a number of (typed) languages and compile into wasm which is designed to be fast.

JavaScript already comes with a few decades of baggage, and things are already changing really fast (ES6 and newer versions). Not everyone likes types, and the people who do can use TypeScript (or Flow). And the people who need insane performance (games, etc.) can use WebAssembly.

Yes, for speed purposes. And yes, some people may not like types; that's why I said completely optional. Anyway, the optimizer tries to figure out the types on its own, so give it a non-blank slate to start with. You are correct in noting that JS is already fast enough, but the problem is that the sort of performance gotchas (order of keys in objects) mentioned in the post aren't really obvious during development. Types may help reduce the unpredictability.

Python totally ignores them at runtime, PHP has this but it actually slows down the code a bit. It does make catching bugs easier.

That's basically what asm.js was, just less friendly to human readers.

asm.js attached type information to every single expression node, something you would never do even in the most statically typed languages.
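For context, asm.js expressed those per-expression types as coercion operators (`x|0` for int32, `+x` for double), so an annotated module is still plain JS. A minimal sketch in that style (`AsmModule` is a made-up example module):

```javascript
function AsmModule() {
  "use asm";                 // directive that marks the module as asm.js
  function add(a, b) {
    a = a | 0;               // parameter declared int32 via coercion
    b = b | 0;
    return (a + b) | 0;      // result coerced back to int32
  }
  return { add: add };       // asm.js exports object
}

console.log(AsmModule().add(2, 3)); // 5, whether or not the engine
                                    // validates it as asm.js
```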

Yeah, I've always wondered this too. It would be cool to be able to extract typings in a separate file, like source maps, and send over the wire along with the JavaScript bundle (for compatibility).

It would definitely help moving sections of the code up the tier ladder, not unlike Web Assembly.

I even predict that future TypeScript versions will include stricter typings optimized for a browser, to solve problems like the ones others have mentioned in the thread.

Or just a TypeScript engine in the browser, like ehm Dartium. I guess someone at Microsoft already did this on IE in a hackathon or something.

I'd much prefer that optional type annotations make it into the spec, like they did for Python and PHP.

TypeScript, being a superset of JavaScript, does allow optional type annotations. RTFM.

i think they mean, make it into the JS spec.


I don't think so: TypeScript/JavaScript types are too coarse-grained for optimizations. For example, number is a double in JS, but for generating good machine code you really need to know whether a number is an int or a double. The same goes for string: depending on the JS engine, there are actually a lot of different string types internally.

Even if we consider that good enough or somehow solved, understanding type annotations only helps you reach peak performance faster. It doesn't make peak performance any higher.

Also there are the PyPy and Dart devs that in theory could use type annotations, but both don't. For PyPy they even have an entry in the FAQ: http://doc.pypy.org/en/latest/faq.html#would-type-annotation...
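A small illustration of the int-vs-double distinction. All JS numbers are doubles by spec, but V8 internally tracks whether values fit in a small integer (Smi) and picks array representations accordingly (the element-kind names below are V8 internals, not observable from JS):

```javascript
const ints = [1, 2, 3];   // stored internally as PACKED_SMI_ELEMENTS
ints.push(4);             // still all small integers, representation unchanged

const mixed = [1, 2, 3];
mixed.push(0.5);          // a single fractional value transitions the whole
                          // array to a double-based representation

console.log(ints.reduce((a, b) => a + b, 0));  // 10
console.log(mixed.reduce((a, b) => a + b, 0)); // 6.5
```

The values behave identically either way; only the engine's internal storage (and thus speed) differs, which is exactly the information a `number` annotation can't convey.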

I think the type annotations themselves don't do anything, but the typing strongly pushes developers toward monomorphic (or at least non-megamorphic) call sites; and having proper-ish data types may also encourage a consistent order of initialisation.

I don't really know about the details of PyPy, but from what I gathered from performance talks about V8, between the hidden classes and inline caching it would strongly benefit from these behavioural changes.

Basically, it doesn't preclude the runtime from having to do its analysis, but it makes the analysis & optimisations "hit" at higher rates than they would otherwise do.
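A sketch of what "monomorphic call sites" means in practice. V8 caches, per call site, the hidden class it last saw (an inline cache); typed code tends to keep that cache stable:

```javascript
function getX(o) { return o.x; }

// Typed code tends to pass one consistent shape, so the IC stays monomorphic:
console.log(getX({ x: 1, y: 2 })); // 1
console.log(getX({ x: 3, y: 4 })); // 3  (same hidden class {x, y})

// Untyped code often mixes shapes, degrading the IC to polymorphic:
console.log(getX({ x: 5 }));       // 5  (hidden class {x})
console.log(getX({ x: 6, z: 7 })); // 6  (hidden class {x, z})
```

The results are identical in every case; only the hit rate of the engine's caches, and therefore the speed, changes.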

>For generating good machine code you really need to know whether the number is int or double

Do you? I remember reading that LuaJIT is able to infer ints for e.g. loops with no specific effort (and no performance hit).

An int64 type is being added to Javascript in the near future, I believe.

Of course it will take a while to be supported everywhere, but they are taking steps in that direction at least.
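(The proposal being referred to presumably became BigInt, which shipped as arbitrary-precision integers rather than a fixed int64. A quick sketch in a modern Node:)

```javascript
// BigInt literals use an `n` suffix and never lose precision:
const big = 2n ** 64n;
console.log(big);        // 18446744073709551616n, exact, beyond Number's range
console.log(typeof big); // "bigint", a distinct primitive type
// Mixing BigInt and Number arithmetic throws, so the two types stay separate.
```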

As an internet user, I really enjoy reading about all these performance improvements in V8 / javascript.

As a ruby developer, I'm insanely jealous.

You might be interested in TruffleRuby: https://github.com/oracle/truffleruby "A high performance implementation of the Ruby programming language. Built on the GraalVM by Oracle Labs."

I researched this yesterday against Crystal. It seems people won't touch it because of Oracle, which is a real shame. Truffle is a 9x improvement in speed over Ruby. But because Oracle is a patent nightmare and a dying organization, people aren't interested.

Uh, how is oracle a dying organization?

I went to go look for evidence of oracle dying and actually came up empty handed. I was also under this impression. I think it has waned significantly in popularity in the developer community over the years, but their stock isn’t bad at all and they seem to have diversified quite a bit in the time they’ve existed.

Oracle doesn't even seem that patent happy these days. They seem to have got burned by their experience with the Java patents (all useless, case revolves now around copyright).

Financially Oracle is doing very well indeed.

There's also JRuby, Ruby running on the JVM. Which can be insanely fast for anything longer running than your typical CLI use case.

I believe Truffle was 3-4x+ faster than Jruby. This is what Ruby needs. But Oracle will ruin it.

headius is at a jvm conf right now talking about optimizing JRuby, and here's a recent video on the subject: https://www.youtube.com/watch?v=4vxIncIm2D4

Well, JRuby runs on the JVM which is developed by Oracle. Basically, if you're a ruby user who wants good performance there's not many other places to turn.

But you know Oracle acquired Java long ago and other than Google, I'm unaware of Oracle causing problems for any other users. And Google is a rather special case - they reimplemented an incompatible version of Java without licensing it. You're not going to be doing that.

It's open source, the community can always fork it and move forwards if they dislike what Oracle does with it.

Don't be too jealous. My main concern with Node and its ecosystem is maturity and security. It loses hands down on both those points compared to Ruby / Rails, especially in the area of dependencies.

This keeps me happily in Rubyland.

> security ... Rails

One bad default setting and [0]BOOM! [0]https://github.com/rails/rails/commit/b83965785db1eec019edf1...

You say "Ruby developer" as if you were born like that and stuck with it. There's an entire world out there!

If the alternative is JavaScript, I think most Ruby programmers are fine where they are ;)

I don't know. I'd say that both Ruby and JavaScript are pretty awesome these days.

It certainly is a lot better than it was. Though every time I'm allowed back at the backend, Ruby is just such a breath of fresh air.

Just use Express.js or Sails.js, they're 1:1 to Sinatra or Rails. Join us!

Unfortunately Sails is nowhere near as good as Rails. Which is kind of strange given the size of the JavaScript community, and the fact that PHP has Laravel and Python has Django, both of which are comparable to Rails.

Amusingly given all the PHP hate, if you want a faster rails-like framework, Laravel might be your best bet.

I don't know anything about Laravel, but I've been working with Rails since 2006 and with Django since 2016. Django has little to do with Rails. It's much more similar to Java Struts from 2005 (when I left it for Rails): form objects and template tags, among other nuisances. No XML, thank god, but a weak deployment story (no Capistrano or Mina; I built my own tool).

I'd pick Django over Struts without thinking (I don't do Java anymore, even with more modern frameworks), but I pick Rails over Django any time customers give me the choice. Django (and Struts) are optimized for large projects at the cost of developer time, but very few projects grow even to medium size. Django has a decent admin tool; that's the only advantage I can think it has over Rails for the typical project I do.

I also worked with Phoenix (Elixir) in the last 12 months. It's kind of midway between Rails and Django in terms of framework and language complexity.

Have you tried Sails 1.0? (There’s always more to do, but we’re working to improve the framework a little more every day.)

Awesome to see the creator of Sails here! And awesome that you have such a positive and constructive response to criticisms! I see that Sails is also a company and not just an open source project, are you a full time company living off of using your open source work? If so can you give any advice to someone who is trying to learn how to make a living with open source? I am full of energy and passion for software and want to put that towards open source, but it would be much nicer to also get paid while doing so!

Not true, unfortunately. Rails' quality is unreachable for current JS frameworks. Well, I hope the situation will change in the future.

Does that mean there's still room in the JS ecosystem for a Rails-alike? Maybe one that could become very popular? Because I am looking for a major open source project to create and spearhead, something that could get hundreds of thousands of active users and a thriving subcommunity, but I've been holding off until I find just the right project.

Personally, having developed in both Rails and Node, I now believe that trying to recreate Rails in JS is a foolish effort. Rails is a huge project with a ton of weight behind it and unless you can generate a massive amount of corporate investment, you won't ever be able to really catch up.

I'm not saying that there's no room in the JS ecosystem for another web framework, what I'm saying is that if your design goal is re-implementing Rails, you're already setting yourself up for failure. Sequelize has tried to be ActiveRecord for how long? The only thing it makes me do is want to go back to Ruby and Rails.

No, what you have to figure out if you want to do a JavaScript web framework is how to get me to actually want to use JavaScript to program a website. How can prototypical inheritance actually contribute to a workflow as opposed to a more conventional inheritance scheme.

But honestly, quite frankly, I can't tell why anyone would want to use JavaScript on the server at all.

Apparently Steve Yegge ported much of Rails to JavaScript (on Rhino, at the time) back in 2007 at Google, but it was never open sourced AFAIK: https://steve-yegge.blogspot.com/2007/06/rhino-on-rails.html

For comparison, (according to `find . -name "*.rb" | xargs wc -l`) Rails back then was about 66K LOC (plus 38K LOC of tests) and is now 137K LOC (plus 216K LOC of tests).

However, Steve says he only ported "essentially all of ActionView, ActionController and Railties, plus a teeny bit of ActiveRecord", so maybe half of the total.

As for the second half of your comment, it sounds like you haven't used JavaScript in a while. ES6 introduced syntax for classical inheritance, and a bunch of other nice features.

I use ES6 at my job, daily. There are some things about it that make me really hate it. First, no, it doesn't introduce classical inheritance. It introduced syntax that looks like classical inheritance. It's still prototypal inheritance under the hood. I'm not entirely sure what this means yet from the standpoint of building things with it, but I'm not enthused at the prospect.
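(For reference, the desugaring is observable directly: class syntax creates ordinary constructor functions and prototype chains. A small sketch:)

```javascript
class Animal {
  speak() { return "..."; }
}
class Dog extends Animal {
  speak() { return "woof"; }
}

const d = new Dog();
console.log(d.speak());                                  // "woof"
console.log(typeof Dog);                                 // "function": a class is
                                                         // still a constructor function
console.log(Object.getPrototypeOf(d) === Dog.prototype); // true: method lookup
                                                         // goes through the prototype chain
```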

What I find happens fairly frequently with ES6 that I never found with Vanilla JS is that the extra features built on top of JS 'break'. If you add syntax to a language, that syntax needs to work. I need to be able to rely on it working. I run into constant little issues that make me think I'm missing something about scoping, when really it's some under-the-hood issue with a library or something I just don't have the time to pin down.

One example is I tried using the spread operator to add keys to an object. But the spread simply, well, failed. Passing in the object worked fine. Passing in a spread version of the same object failed. I haven't resolved it yet, got pulled onto another feature. This sort of breakage is hard to Google, when I run into it again, and I'm sure I will, I may have to troubleshoot it all the way to Babel.

Error reporting in Node with ES6 is garbage. Worse than garbage, it's a veritable dumpster fire. Ideally the error points to the problem, when it doesn't point to the problem you have to rely on experience and intuition to lead you to the issue. Many many errors I come across in JS are of this sort.

I think most of these issues come down to the fact that ES6 is a transpiled language. This makes me long for the days of good old CoffeeScript. At least CS was close enough to Vanilla that it was easy to determine when you had an issue with the transpiler, simply grab the snippet of code and paste it into the online transpiler, look at the generated JS and work out your issue from there. It wasn't the smoothest workflow but it was effective.

ES6 is stupid in ways that make building an effective workflow unreasonably difficult. I can't wait to get back to Rails. Vanilla JS wasn't that bad. It was well-understood and you could work with it effectively on the front-end. It certainly wasn't as pleasant as Ruby, but it didn't feel like the pile of hacks that ES6 does. Maybe once it stops being transpiled it'll get better.

Indeed, JS does not have classical inheritance. IMO introducing a class syntax that looks so much like the classes from other languages and yet behaves differently was a mistake.

The rest of your comment sounds a bit misinformed to me. You seem to have decided that prototypical inheritance is somehow inherently worse than classical inheritance, but don't mention why that would be. I have found very little practical difference between the classes in JS and other languages in everyday use, and can't quite imagine what the problems with it might be.

Furthermore, ES6 (ES2015) is not a "transpiled language". I think it's obvious that if you take a very new language, say, ES2018 and want to run it on your toaster that doesn't have support for such new languages, you are going to have to do some kind of precompilation step. That is true for ES2018 today and it will be true for ES2019 next year and for ES2020 the year after that.

ES2015, however, has been around for several years. All the modern browsers (that is, all major browsers except IE) support it already. Node 10 even supports the new module loading syntax (behind a flag), or you can use a very lightweight transformer like esm[1] for older Node versions.

And if you do end up using and having problems with, say, Babel, it would be more constructive to give concrete examples of the issues you've had. I personally have never faced a syntax problem where the issue would've been due to a bug in Babel instead of just my incorrect understanding of the language feature.

[1]: https://github.com/standard-things/esm

I didn't intend to argue that prototypal inheritance was bad, just that I didn't relish the prospect of building something in it, in the context of a discussion about reinventing Rails in Javascript. The argument is that with the amount of time and effort that went into Rails, the new framework has to offer something a lot more unique than just "Rails in JS" if it wants to be relevant, because you'll never even get remotely close to the maturity of Rails.

The problem is that I can't give concrete examples because we're under the gun of a deadline and I can't afford to spend the time to troubleshoot down to root causes rather than just work around the problem and move onto another feature.

I'd love to be able to tell you why the spread operator didn't work in that case. But it didn't, and I made sure to get the whole team around me to tell me I wasn't being crazy. The syntax simply didn't create the needed semantics, and that means that something got messed up in the design of the language. I'm pointing to Babel because that's the only thing I can point to as a root cause.

Rails is nowhere even close to this level of broken. You can rely on the syntax and semantics of Ruby, sometimes gem authors play nasty games with metaprogramming, I saw an example where someone monkey-patched Symbol to get a more declarative method for describing SQL Where Conditions, but at least that crap wasn't in ActiveRecord.

Rails, as a stack, fits together and experience with the framework will allow you to trust it.

Syntax is the foundation of a programming language. If it doesn't work, if it doesn't produce precisely the behavior that's being described, your language is broken. We're not talking standard library here, we're talking about `{...object}` not being the same as `object`. I don't have time right now to dive into why, but that's the kind of shit I run into when I deal with ES6. When syntax breaks, you can't trust the language anymore. It's a pile of hacks and I wish it had never been invented. CoffeeScript was better.

From what I remember, object spread came later: it's part of ES2018, not ES2015 (in case you need a starting point for research).
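For reference, these are the expected object-spread semantics in a compliant engine (a minimal check, assuming a modern Node; if this behaves differently after transpilation, the Babel plugin configuration is the place to look, since pre-ES2018 targets need the object-rest-spread plugin):

```javascript
const base = { a: 1, b: 2 };
const copy = { ...base, c: 3 };      // copies base's own enumerable properties
console.log(copy.a, copy.b, copy.c); // 1 2 3
console.log(copy !== base);          // true: a shallow copy, not an alias
```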

It sounds like you're looking for Ember. https://www.emberjs.com

Ember is a front-end framework, Rails is back-end.

Definitely. And using these technologies, there are some cool possibilities beyond what rails is capable of.

For instance, if you used TypeScript (which still allows devs to use plain JS), you could use types to check that forms send all the required params. For example, a login page might require:

    interface LoginParams {
      username: string;
      password: string;
      rememberMe?: boolean;
    }
Having the TypeScript compiler check for inconsistencies with form data would save a lot of the trouble I've seen in Rails apps.

Unfortunately TypeScript doesn't do runtime checks like this. It assumes that incoming data conforms to the type spec! However, you might be interested in the Rocket framework (written in Rust), which does do exactly that:


TypeScript can do runtime checks like this! Granted it uses a TypeScript framework (similar to AJV) that both does the runtime checks and preserves type information for your IDE and type checker. I wrote about it here:


Aw, man - I'd hoped you'd found something to make this less of a stone-cold pain to do. Instead, it's io-ts again. Which is...interesting...but it really sucks to actually use when you have to manually define every in/out yourself. It's 2018. There's no good reason TypeScript can't emit the type information to just build these (and I don't really mean decorator metadata).

You can add another build step right after the `tsc -w` compilation phase, which takes your .ts files, parses them to find out your types, and emits runtime code that checks your types. That's what you're describing that TypeScript should be doing on its own. But I don't see why you'd want to do this instead of using io-ts? This is more complicated overall.

Runtime type information is important. TypeScript is in the best place to provide it. So...it should. Boilerplating our way through io-ts is a waste of my time and yours.

been using io-ts in my latest project, fantastic bit of code, hats off to gcanti... Only thing is IntelliJ can’t keep up with the types generated (got an open bug with them)

There's definitely the need for this in JS. It would be extremely difficult to do, though.
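A hand-rolled sketch of such a runtime check, i.e. the kind of validation io-ts-style codecs automate (`isLoginParams` is a hypothetical helper for the `LoginParams` interface discussed above):

```javascript
// Runtime validation for { username: string; password: string; rememberMe?: boolean }
function isLoginParams(v) {
  return v !== null && typeof v === "object" &&
    typeof v.username === "string" &&
    typeof v.password === "string" &&
    (v.rememberMe === undefined || typeof v.rememberMe === "boolean");
}

console.log(isLoginParams({ username: "a", password: "b" })); // true
console.log(isLoginParams({ username: "a" }));                // false: password missing
```

The pain point in the thread is exactly this: the check duplicates information the type annotation already carries, which is why people want the compiler to emit it.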

With 2.6 JIT Ruby should be much quicker this fall. It'll be interesting to come back around once they have method inlining.

What Rails features do you miss from which framework?

How is Express even remotely close to Rails? Express is a glorified router.

I think the parent comment was comparing Express to Sinatra and Sails.js to Rails respectively, not altogether.

I believe parent's intention was to say that Express is like Sinatra and Sails is like Rails.

To which I agree with the Express / Sinatra similarity.

Express -> Sinatra

Sails -> Rails

> As an internet user, I really enjoy reading about all these performance improvements in V8 / javascript.

As an Internet user who has been reading for years about how fast the various JS engines are, it annoys me that the web still feels slow :-]

What about jruby or oracle's java+ruby thing?

Replace Ruby with Python and I have the same feelings.

And here I am just happy I don't have to touch js.

I'm being downvoted because I can't be happy?

You’re probably being downvoted because your comment doesn’t contribute anything to the thread. And rightly so, unless you want HN to be as useless as /r/programming.

Your comment is also borderline rude, as it implies that anyone using JavaScript is unfortunate. I mean, what did you really want to achieve by posting that comment?

How my comment is different from this one - https://news.ycombinator.com/item?id=17650823 for example?

>Your comment is also borderline rude, as it implies that anyone using JavaScript are unfortunate

No, it does not; that's just your interpretation. I never used the word 'fortunate' or anything similar. I'm happy not to work with it, as in "I'm happy not to use public transport today, because I like walking". Surely this can't be rude to anyone except those people who see everything as a way to offend them somehow. Inferiority complex maybe? No idea.

> I mean, what did you really want to achieve by posting that comment?

Just wanted to express myself. As it turns out that's a wrong thing to do unless you have some sort of clear goal in mind. Noted.

I would absolutely choose Ruby over JavaScript for the major advantages it has to offer. In most cases performance is not a real need compared to consistency. Also, Ruby 2.6 has a JIT; it would be great to see some benchmarks. :P

2.6’s JIT is entirely at the method level currently, and unfortunately for certain types of workloads (e.g. Rails) this means the performance benefit is largely outweighed by the call counters and deoptimization checks on every method invocation.

There’s definitely room for improvement, and the inclusion of JIT infrastructure is awesome, it’s just not making much of a performance impact. Yet.

The overheads from the call counters and deoptimization checks are tiny. The real problem is that right now MJIT doesn't allow much optimization beyond generating native code equivalent to simply executing the instructions. It's ridiculously simple compared to V8.

V8 is also a method-based JIT as far as I know.

Out of curiosity, what are the major advantages Ruby has to offer over JavaScript?

Isn't there a Ruby->JS transpiler?

Why can't V8 deduce that the order of keys doesn't matter? It's pretty nuts that just rearranging the keys from { x, y, z } to { y, x, z } would cause a slowdown.

I agree that this is unexpected for most of us, but if you read this section in my v8-perf repo (https://github.com/thlorenz/v8-perf/blob/master/data-types.m...) you'll understand better why that is.

However it doesn't necessarily cause a large slowdown, just makes your function polymorphic and the resulting optimized code larger and a bit slower.

To avoid this entirely I recommend using a JavaScript `class` when passing objects to a function that has to run at peak speed.

A factory function will also give the objects it returns the same hidden class, resulting in monomorphic code that can be optimized.

    function vec3 (x, y, z) { return {x, y, z} }
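A hypothetical sketch of the contrast (helper names are mine, and the hidden-class effect itself is only observable with V8 internals, so the comments describe the engine behavior rather than anything visible from JS):

```javascript
// Literals with keys in different orders get different hidden classes,
// so a call site that sees both becomes polymorphic.
const a = { x: 1, y: 2, z: 3 };
const b = { y: 2, x: 1, z: 3 }; // same keys, different creation order

// A factory funnels every object through one creation path, so all
// results share a single hidden class and call sites stay monomorphic.
function vec3(x, y, z) { return { x, y, z }; }

function length(v) {
  return Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

console.log(length(vec3(3, 4, 0))); // 5
```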

Why does it have to be that the order of members matters? Shouldn't both {x,y,z} and {z,x,y} compile down to {x,y,z}?

In JS the for-in loop enumerates keys in insertion order, so {x,y,z} is different from {z,x,y}.

AFAIK, that's not true; the spec doesn't guarantee that it iterates in insertion order, it's just how it's commonly implemented in browsers.

Nope, they should be iterated in their defined order: https://tc39.github.io/ecma262/#sec-ordinaryownpropertykeys

I think they added this with ES2015.

You can't deduce that; you'd need to show that the object is never passed to for..in or Object.keys or similar to be able to avoid storing the insertion order.

They could change the representation to not be insertion order dependent, and store insertion order in a separate data structure, but that has its own trade-offs.

Does for..in actually require that keys be returned in a specific order? Most languages specifically call out that the order is not guaranteed (not that that stops developers from relying on an order)

The ES spec doesn't require it, but every implementation does insertion order (and insertion order was a deliberate decision by Brendan in the original implementation, IIRC), and the web very much relies on it.

The "new" generation of JS VMs (V8, Chakra, Carakan) all dropped insertion order for array index properties (that is properties whose name is a uint32), but kept it for everything else; that broke about as much as browsers are willing to break, and breaking the general case would be far, far worse.

Building a browser sometimes feels like building a language where the only guarantee is the worst features will be used and maintained.

I mean this not as a slight on the ecmascript creators/maintainers but rather as an observation of how difficult the problem is.

> Does for..in actually require that keys be returned in a specific order?

It's complicated. The spec says order of for..in is not defined, browsers all implement the same order, and there keep being attempts to get the spec changed to match.

Object.keys, however _does_ have a defined order. https://tc39.github.io/ecma262/#sec-object.keys calls https://tc39.github.io/ecma262/#sec-enumerableownpropertynam... which calls the [[OwnPropertyKeys]] internal method. For normal objects this is defined at https://tc39.github.io/ecma262/#sec-ordinary-object-internal... which calls https://tc39.github.io/ecma262/#sec-ordinaryownpropertykeys which completely specifies the order: integer indices in ascending numeric order, then other string-named props in order of property creation, then symbol-named props.
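The specced ordering (integer-like keys first in ascending numeric order, then string keys in insertion order) is easy to see in any modern engine:

```javascript
// Per OrdinaryOwnPropertyKeys: integer indices come first, ascending
// numerically, then the remaining string keys follow insertion order.
const obj = { b: 1, 2: "two", a: 3, 1: "one" };
console.log(Object.keys(obj)); // [ '1', '2', 'b', 'a' ]
```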

> you'd need to show that the object is never passed to for..in or Object.keys or similar, and unless you solve the halting problem, you can't do that.

This is pretty blatantly incorrect. Just because a problem is undecidable in general doesn't mean that there aren't specific cases where it can be solved. Optimizing compilers deal with undecidable problems all over the place, generally by either being conservative and lumping together proved-impossible with unable-to-be-proved invariants, or doing speculative optimizations with an unoptimized fallback. Even just proving the type of a variable in JavaScript is undecidable in the general case.

You are, of course, correct.

My hypothesis (as someone who used to work on a JS VM) would be that locations of iterations over an object's keys and the creation of the object are almost always in different functions, with the object passed between them. And note that JS VMs rarely do cross-function optimization. As such, it would be exceptionally rare for the optimization to be applied.

My point was really that you can't always drop the ordering of properties, and the insertion order isn't something you can recreate after the time if you don't store in initially, hence you do really need to store it somehow.

(If anyone's confused, I edited the grandparent comment of this to better reflect the above.)

You don't need to prove it, you just need to guess it successfully. If you're wrong you can deoptimise and fix-up to the same base-line performance.

> If you're wrong you can deoptimise and fix-up to the same base-line performance.

But how do you know it's wrong (if the iteration order of a .keys() call isn't the same)?

The original code didn't have a call to .keys(), and that's the case where this should be optimized. If you do the unordered optimization, then encounter an operation that needs an ordered iteration, you'd have to fall back to an ordered representation (which could be slower than not doing the optimization at all)

That requires extra bookkeeping for ordering, but it isn't really that common for people to have object literals with the same properties initialized in a different order.

But a much more common case is that properties added after object creation will be added in an inconsistent order, for example:

    o = {}; o.x = 1; o.y = 2; o.z = 3;
    p = {}; p.y = 4; p.x = 5; p.z = 6;

When executing p.y = 4 you don't know that the object will eventually evolve to have the same properties as o.

That would surely be the second option? Treating all permutations the same means changing the object representation to keep ICs monomorphic across orderings.

Unless you forbid any impure operations you can't just deoptimise and re-execute code to then store the ordering. (I'm sure this isn't what you meant, and I'm probably being silly!)

Because it uses hidden classes to get O(1) object property reads. The slowdown comes from the inline cache, as explained in the article below.


Keys are written to objects in the order in which they are assigned. Why would you want to modify that order, since it's independent of key access?

> In some extreme cases, developers had to write assembly code by hand for the four supported architectures.

Maybe I'm tainted from my experience building / debugging J9 / OpenJ9 but... Why is this considered extreme? If one strays one iota from the standard platform ABI, which is basically a necessity for producing a performant runtime (PICs being an easy example), this is where one ends up.

I used to believe that JavaScript was slow, C was fast, and that there was no way to change this because of the way that JavaScript is interpreted and not compiled. But the more I've learnt about the benefits of "on the fly" / "just in time" optimising compilers, the more I'm convinced this is the future of computing. Being able to use multiple threads, and hence multiple cores, simultaneously to optimise what is actually a single-threaded piece of code is quite amazing. And being able to write code that is insanely portable and optimised on any platform is great. You can take advantage of SIMD, hyperthreading, multicore, large caches... without knowing if they're available ahead of time.

Sure, you won't beat C for low memory devices or hardware that you have complete control over but for 90% of use cases javascript actually makes sense and can be the most performant.

> javascript actually makes sense and can be the most performant.

Thanks to V8 you can definitely achieve good enough performance with JavaScript for most applications, and the ergonomics are pretty great. But it’s not true that performance rivals or beats C in the general case. The ‘sufficiently advanced compiler’ rainbow is something that Java has spent decades chasing. I think it’s fair to say at this point that humans are better at writing C code than we are at writing clever compilers.

It’s many of the exact ergonomics which make JavaScript useful and easy to use that also make it much harder to optimise. For example, what code should V8 generate for `x += x`? The optimiser might find that x is always a positive integer, but every time it gets doubled, it might cross the magic overflow point where it needs its representation swapped for a double. The generated code must check for this every time. Or consider memory management: for games I’ve heard of people making per-frame arena allocation pools for values with a lifetime that won’t cross frame boundaries. This basically makes these allocations free and improves cache coherency. I’m happy that the JS GC is multithreaded now, but there’s no way a GC can compete with that. And unlike C, there’s no way in JS to override the allocation behaviour.
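The engine-internal small-integer/double switch isn't observable from JS, but the numeric semantics that force it are. A sketch of what any engine must preserve when doubling an integer (V8's exact internal representations are an assumption here, not shown by this code):

```javascript
// Doubling a small integer eventually forces a wider representation.
// JS numbers stay exact integers up to 2^53; past that, doubles can no
// longer distinguish adjacent integers, so n + 1 === n.
let x = 1;
for (let i = 0; i < 53; i++) x += x; // x === 2 ** 53, still exact
console.log(x === 2 ** 53); // true
console.log(x + 1 === x);   // true: integer precision lost past 2^53
```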

There are a million paper cuts like this which lower the ceiling of how fast optimised JS can run. I’m glad V8 will keep improving, but for my money the future of really fast JavaScript is Webassembly.

>For games I’ve heard of people making a per-frame arena allocation pools

Games are a special case where each ms counts. Globally, the number of LOC written for consumer or business software dwarfs that of game code, and for these kinds of applications JS is well within the realm of performant enough. That assumes you're not doing your presentation layer using the DOM, though, which is still garbage wrt performance. Luckily there are things like Qt for building high-performance UIs.

Yes I think we all agree. As I said in my post above:

>> Thanks to V8 you can definitely achieve good enough performance with JavaScript for most applications

The point I argued above isn’t that JS is slow. It’s that JS will probably always be slower than well written C code. I still write far more JS than C, because my time is usually more valuable than the computer’s.

That said, games are far from the only place where every ms counts. Performance matters in database servers, operating systems, real-time applications (eg robotics), text editor typing latency, UI responsiveness, 3d modelling software, video encoding, cryptocurrency mining, browser DOM rendering and so on.

I love JS, but native code is not a special case. Despite the best efforts of Electron, I suspect most of the globe’s aggregate clock cycles are spent running code written in languages other than JavaScript.

Thankfully the world of managed languages is not constrained to JavaScript, nor is C the only option for unmanaged ones. C is a language that, we would once have said, no respectable games programmer would use; after all, a good game engine should be written in Assembly.

Ironically it is now used as the language to beat.

I don’t use JS regularly because the ergonomics of async have been so poor. This might not be the case with recent async/await work, but I’ve also not had positive async/await experiences in other languages either. This is too bad because V8 seems like a dream compared to CPython or even Pypy, but I prefer Go to both, especially when it comes to I/O.

You make some interesting points about cases where JS will always be difficult to be optimized automatically.

I wonder if compiler hints can help in that regard? For example, if performance is important then you could annotate the 'x+=x' line to tell the compiler to not check for overflow.

We have that already. It's called asm.js and it's successor is WebAssembly.

Another option would be that the compiler deduces when exactly the statement x+=x would need to switch from integer to double. That's more difficult, and impossible in general, but not impossible in specific cases.

Sure, you can hint with OR: `x = x | 0; x = (x + x) | 0;`. Bitwise operators always work on 32 bit integers in js. I'm not sure whether the js vms actually need or use this information.
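For what it's worth, the `| 0` trick has observable semantics: ToInt32 wraps modulo 2^32, which is exactly the contract asm.js-style code builds on:

```javascript
// `| 0` coerces to a signed 32-bit integer (ToInt32), wrapping on
// overflow and truncating fractions toward zero; asm.js used this as
// a de facto int type annotation.
const INT32_MAX = 2147483647;
console.log((INT32_MAX + 1) | 0); // -2147483648 (wraps around)
console.log(5.9 | 0);             // 5 (truncates toward zero)
```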

The reason most of the features of the JIT are needed is due to the lack of static typing in javascript. The most important thing the JIT does is figure out what types of things you are using and optimize for them.

There will always be some cost here, because at runtime you need to check if your assumptions still hold. Some of the time you can pull these checks out of loops, but sometimes you can't, and they can be costly.

There are a few things a JIT can theoretically do better. Inlining is the biggest one. For example, a JIT knows not to inline a function that never gets called, and it can also do inlining for indirect calls based on profile data. A JIT could also theoretically move unexecuted blocks out of line for better instruction cache locality. It could also check for aliasing at run time (with a bailout) and then optimize a loop assuming no aliasing.

Unfortunately for JITs, AOT compilers can do a pretty good simulation of most of these things with PGO, that is running a scenario with an instrumented binary and re-optimizing using profile data.

Still can't bailout if an assumption is wrong, so it doesn't work for dynamic languages, but you don't really need bailout for much in static languages.

In my group at Oracle we're experimenting with running C using the same just-in-time compilation techniques that JavaScript uses, and sometimes we see it running faster than ahead-of-time native compilation, due to the effects of things like inline caching and profiling.

Reports of some JITted Java code running faster than C (after warmup) are old. I remember seeing those claims for some super hot VM in the old IBM Systems Journal from 2000, and yes, IIRC that was a VM written mostly in Java.

However, those isolated examples rarely if ever translated to high performance in real-world projects. The same issue has an experience report of the travails of IBM's San Francisco project. Performance was a huge issue that to the best of my knowledge they never fully resolved.

More in my article "Jitterdämmerung": http://blog.metaobject.com/2015/10/jitterdammerung.html

Heh ... one of my employees took it into their head to code up some arithmetic algorithms in C++ a month or so ago. We do not use C++ for anything, we are all Python and JVM based. But he decided that he was going to achieve an amazing win by optimising some numerical code to get order of magnitude benefits, and without asking invested 4 hours into coding it up. I wrote a naive implementation of the same thing in Groovy, of all languages. My implementation was initially 20 times as fast and I coded it in 30 minutes.

So he debugged some more and figured out that he misunderstood some of the inner workings of how vectors copy data and also that he did not understand the threading library he was using properly. He then fixed those two things. After this further exercise he reduced the difference to factor of 4. However he was never able to work out why my code was 4 times as fast as his C++ and abandoned it.

I know for sure that with appropriate expertise the C++ could probably be made to go perhaps twice as fast as my Groovy code. But the point is, none of the supposed benefits come automatically regardless what language you are using. And unless you flip over to GPU or FPGA accelerated methods, the final outcome is well and truly in the same ballpark anyway.

But all this is to say that "rarely translated" might be true for applications that are completely in the high-performance domain. But for all the applications where the high-performance code is in niches at the edge, and there simply aren't resources or expertise to fully tune the native implementation... I think it's translated all the time.

In my experience, writing java (or groovy here) in c++ results in horribly slow code which the jvm runs circles around, and it sounds like that's the problem your employee ran into.

> But for all the applications where the high performance code is in niches at the edge and there simply aren't resources or expertise to fully tune the native implementation

It's interesting you say this, because in my experience it's the JVM which requires absurd amounts of tuning and native programs which are much more consistent. The proper and easier way that native programs are written lends itself to fairly respectable performance, mostly because the object and stack model of say C or C++ is so much friendlier to the CPU than in most dynamic languages.

In general, for all that I hear statements along this line, I've only twice seen code to back it up, and the C was so de-optimized from the OCaml version that I suspect it was intentional - the author (same for each) was a consultant for functional languages, and in one case switched the C inner loop to use indirect calls for every iteration and in the other switched the hash function between the C and functional comparison.

In addition, a lot of the techniques used to write high-performance Java boil down to "write it like C". Avoid interfaces, avoid polymorphic virtual calls (as you can't avoid virtuals entirely), avoid complex object graphs, avoid allocating as much as possible...it's not nearly as nice as naive Java. Still nicer than C IMO. If your process segfaults you can know for certain that it's a platform bug.

The other thing that makes Java nicer than C is the ease and depth with which you can profile it to discover where the bottlenecks actually are. While it's certainly possible to profile in both cases, the runtime reflective and instrumentation capabilities of the JVM really add a lot of power to it.

There's this classic paper from Google that runs an optimisation competition on the same program written in C++, Java, Scala and Go:


This is a great benchmark of the fundamental problems with, say, Java: the code itself is fairly simple and the JITs probably generate optimal code given their constraints, but the performance problems clearly show that the GC and pointer chasing really hinder your performance.

If you add in cases where simd, software prefetching, or memory access batching help, the difference will only grow.

It’s not native vs VM, but rather “has stack semantics/value types” vs “no stack semantics/value types”. In particular, OCaml’s standard implementation is native, not a VM.

Also worth calling out Go, which is rather unique in that it has stack semantics but it also has a garbage collector, so it’s kind of the best of both worlds in terms of ease of writing correct, performant code.

Go is not rather unique in having GC and stack semantics, there are plenty of languages that have it, all the way back to Mesa/Cedar and CLU.

I should have been more clear I guess; I was comparing it to other popular languages. Few have value types and many that do (like C#) regard them as second-class citizens.

But Go has an imprecise GC (in the reference implementation) or stack maps (in gccgo), so the GC overhead is rather large. It also lacks compaction, so cache behaviour isn't that good either.

Not sure what you mean by imprecise, but Go’s GC does trade throughput for latency. The overhead still isn’t huge if only because there is so much less garbage than in other GC languages. I’m also surprised by your cache misses claim; Go has value types which are used extensively in idiomatic code so generally the cache properties seem quite good—maybe my experience is abnormal?

>Not sure what you mean by imprecise

It's a rigid term:


perf shows how much time the GC eats, and that's quite a lot. Thus in the majority of benchmarks Go lags behind Java, or at best is on par with it.

>there is so much less garbage than in other GC languages

That is not true, since strings and interfaces are heap allocated; thus the only stack allocated objects are numbers and very simple structs (i.e. ones which contain only numbers). So you will have a lot of garbage unless you're doing number crunching, which could be easily optimized by inlining and register allocation anyway.

> It's a rigid term

Ah, neat! I learned something. :)

You’re mistaken about only numbers and simple structs being stack allocated. All structs are stack allocated unless they escape, regardless of their contents. Further, arrays and constant-sized slices may also be stack allocated. I’m also pretty sure interfaces are only heap allocated if they escape; in other words, if you put a value in an interface and it doesn’t escape, there shouldn’t be an allocation at all.

Both arrays and interfaces are heap allocated. A slice is just a pointer to a heap allocated array.

A struct can be stack allocated, but any of its fields will not be if it's anything but a number.

A trivial example:


    func main() {
            x := 42
            var i interface{} = x // trivial interface cast
            _ = i
    }
    $ go build -gcflags='-m' main.go
    ./main.go:7: x escapes to heap
So a trivial interface cast leads to allocation.

Looks like you're right about interfaces (full benchmark source code: https://gist.github.com/weberc2/87d2fdc379065a2765d1c9f490ad...)!

    BenchmarkEscapeInterface-4        50000000   33.3 ns/op  8 B/op  1 allocs/op
    BenchmarkEscapeConcreteValue-4    200000000  9.45 ns/op  0 B/op  0 allocs/op
    BenchmarkEscapeConcretePointer-4  100000000  10.0 ns/op  0 B/op  0 allocs/op
But arrays are stack allocated:

    BenchmarkEscapeArray-4  50000000   21.3 ns/op  0 B/op  0 allocs/op
And structs are stack allocated, as are their fields--even fields that are structs, slices, and strings!:

    BenchmarkEscapeStruct-4  100000000  12.8 ns/op  0 B/op  0 allocs/op

The code:

    type Inner struct {
    	Slice  []int
    	String string
    	Int    int
    }
    type Struct struct {
    	Int    int
    	String string
    	Nested Inner
    }
    func (s Struct) AddThings() int {
    	return s.Int + len(s.String) + len(s.Nested.Slice) + len(s.Nested.String) +
    		s.Nested.Int
    }
    func BenchmarkEscapeStruct(b *testing.B) {
    	for i := 0; i < b.N; i++ {
    		s := Struct{
    			Int:    42,
    			String: "Hello",
    			Nested: Inner{
    				Slice:  []int{0, 1, 2},
    				String: "World!",
    				Int:    42,
    			},
    		}
    		_ = s.AddThings()
    	}
    }

I'm sure your strings are not stack allocated, they are statically allocated (and would be statically allocated in any language). Not sure about arrays, but dynamic arrays should be dynamically allocated too; your arrays are probably static. They would be heap allocated if you used make.

It doesn't matter whether they're stack allocated or statically allocated; neither is garbage, contrary to the original claim ("Go generates a lot of garbage except when dealing with numeric code"). The subsequent supporting claims ("structs with non-numeric members are heap-allocated", "struct fields that are not numbers are heap allocated", etc) were false: sometimes non-numeric members are heap allocated, but often they're not allocated at all, and never simply because they're non-numeric; and their container is never heap allocated based on where the member data lives.

I think this matter is sufficiently resolved. Go trades GC throughput for latency and it doesn't need compaction to get good cache properties because it generates much less garbage than traditional GC-based language implementations.

>It doesn't matter whether they're stack allocated or statically allocated

It does. Any language can do static allocation; Go is no different from Java here. The problem is that in any real code nearly all your strings and arrays will be dynamic, and thus heap allocated, as will interfaces. Consider also that allocations in Go are much more expensive than in Java or Haskell.

We're talking past each other. My claim was that Go doesn't need compaction as badly as other languages because it generates less garbage. You're refuting that with "yeah, well it still generates some garbage!". Yes, strings and arrays will often be dynamic in practice, but an array of structs in Go is 1 allocation (at most); in other many other languages it would be N allocations.

> Consider also that allocations in Go are much more expensive than in java or haskell.

This is true, but unrelated to cache performance, and it's also not a big deal for the same reason--allocations are rarer in Go.


Consider `[]struct{nested []struct{i int}}`. In Go, this is at most 1 allocation for the outer array and one allocation for each nested array. In Python, C#, Haskell, etc, that's something like one allocation for the outer array, one allocation for each object in the array, one allocation for each nested array in each object, and one allocation for each object in each nested array. This is what I mean when I say Go generates less garbage.

>Consider `[]struct{nested []struct{i int}}`.

A typical example, yeah. I've said my piece about structs of ints already; unfortunately it's not a common type anywhere beyond number crunching, at which Go sucks anyway.

In haskell you could have unboxed array with unboxed records. Check Vector.Unboxed.

> I've said about structs of ints already

Yeah, but you were wrong (you said other kinds of structs would escape to the heap). The innermost struct could have a string member and a `*HeapData` member; it wouldn't matter. The difference in number of allocations between Go and others would remain the same. The difference isn't driven by the leaves, it's driven by number of nodes in the object graph; the deeper or wider the tree, the better Go performs relative to other GC languages.

> In haskell you could have unboxed array with unboxed records. Check Vector.Unboxed.

For sure, but in Go "unboxed" is the default (i.e., common, idiomatic); in Haskell it's an optimization.

Regarding your last point, Crystal has the same features as Go in that regard, while at the same time being vastly more expressive. This is mostly due to the standard library in Crystal being so nice for working with collections (which perhaps isn't surprising, as the APIs are heavily influenced by Ruby). Blocks being overhead-free is another necessary part for this to work well.

Yeah, I often find myself wishing Go's type system were a bit better, but the reason I prefer it is because it's fast, easy to reason about, and the tooling/deployment stories are generally awesome (not always though--e.g., package management). So far I'm only nominally familiar with Crystal; I'll have to look into it sometime.

.NET is another example of value types in a garbage collected language. It’s also somewhat unique afaik in doing so within a VM.

Definitely. I’m sad that they’re not more idiomatic in C#. I definitely prefer values and references over OOP class objects.

This is exactly it: dynamic languages give you OK-ish performance and fast development speed. Fast C++ code requires a lot of expertise; this kind of expertise is expensive, and there are diminishing returns too. I don’t know anything about your colleague's expertise in C++, but given that their first optimization was to eliminate some redundant copying, I suspect there is more room for improvement. After unnecessary copying is removed it usually boils down to things like cache locality, better memory allocation discipline, data alignment, and sometimes knowledge of a better algorithm applicable to the particular situation (e.g. radix sort, perfect hashing, or tries), judicious use of multithreading (in the form of OpenMP), understanding whether single precision floating point is good enough, etc.

I don’t think this is a property of dynamic languages. This groovy example is almost certainly the best case for the JVM (arithmetic, few allocations or polymorphism, etc). In other words, probably not taking advantage of the dynamism.

Dynamic languages can be fast by being well designed and simple (Wren) or highly optimized (JS) or both (LuaJIT). There’s also the experimental GraalVM, but this is definitely the exception and not the rule.

Writing performant C++ is actually not hard if you have rudimentary C++ knowledge. That said, rudimentary C++ knowledge is a lot more expensive than rudimentary knowledge of other languages. But the options aren’t just dynamic languages vs C++; there’s a ton of middle ground with VM languages like Java and C# and native languages like Go. The first two aren’t much harder than dynamic languages, and I find Go easier than any dynamic language I’ve used to date (I’m a professional Python developer). But all of these languages are on the order of half of C’s performance, 100X the speed of CPython or Ruby, and 10X that of JS.

I don’t think I’m following your point. That C# isn’t a VM language because there exist AOT compilers? That’s fine; my point is unrelated to the VM/interpreter/AOT taxonomy—just that dynamic languages aren’t particularly performant. I’m happy to concede the “C# is/isn’t a VM (even though 99% of production deployments are VMs)” point if it matters to you.

Oh blimey, four hours...

Are you going to let your employees get away with investing all of 4 hours into things you did not give them permission for? You haven't shown them who the alpha is until they ask you for permission to go to the bathroom.

From the late 90s until 2006, I believed in the sufficiently smart JIT. There were a few benchmarks showing corner cases of Java out-performing C/C++. I followed the research. I was particularly excited by Michael Franz's SafeTSA research on more JIT-optimized program representations. Surely we were just a few years from Java generally out-performing C/C++! I worked on the most popular Java desktop app.

Then I moved to Google, working on the indexing system and properly learned C++. I've since worked on several projects for several companies where the annual cost for compute time is easily 200 times my salary. These are systems where it pays to extensively profile both CPU and I/O and retain C++ optimization experts.

I still see the appeal of platform-independent binaries and dynamic optimization, but you really need the startup time and optimization cost advantages of ahead-of-time compilation. Hopefully the platform independence and dynamic optimization features would also be decoupled from the garbage collector.

HP Research had Project Dynamo, which was basically a tracing JIT emulator for PA-RISC binaries running on PA-RISC itself. It showed that binaries compiled with -O2 could get performance comparable to -O4 through dynamic recompilation.

Ideally, we'd distribute programs in a compact and compressed CPU-independent SSA representation similar to SafeTSA or LLVM bitcode. Installation would AoT-compile the binaries and keep around the SSA form for use by an HP Dynamo-like runtime optimizer. The AoT compiler could compile functions and methods to lists of straight-line extended basic blocks, with a call to a trampoline loop or other techniques for lightweight instrumentation/tracing of native code. The dynamic optimizer wouldn't need to work with arbitrary machine code, only AoT compiler output. Also, it would never have to disassemble native code, but could always start by gluing together the SSA for each extended basic block in the trace.

A few CPU features could make native tracing and dynamic recompilation extremely light weight. Unfortunately, Intel and ARM are disincentivised to do so, as it would make it easier to migrate off of their intellectual property.

> sometimes we see it running faster than ahead-of-time native compilation

Edge cases. But obviously it's not comparable to something ahead-of-time compiled and hand profiled with PGO and stuff. Although in theory you could put that through JIT too, but it would probably just add overhead and only slow things down.

I suppose you're talking about sulong? https://github.com/graalvm/sulong

Pretty cool that it can run faster than AOT compiled C, any benchmarks or posts with more details on this?

Not surprising. HP did this with PA-RISC binaries, back in the 1990's: http://www.hpl.hp.com/techreports/1999/HPL-1999-78.html

They found that most programs ran slightly faster in the JIT. I'm still not sure why this technique didn't catch on.

> sometimes we see it running faster than ahead-of-time native compilation, due to the effects of things like inline caching and profiling.

which is what Java touts as being able to do. Unfortunately, I think this sort of speedup is really dependent on the application (and on the developer using common idioms that are recognized by the JIT).

The difference in speed has a lot more to do with memory layout. Javascript will never be as fast for this reason.

You could make a JIT language as fast as C so long as it supported value types properly. Most programs in the "slow" languages spend most of their time chasing pointers around.

To elaborate on your point, it's not just chasing pointers; it's that those pointers point into random heap locations, each of which needs to be allocated individually and garbage collected. Also, because of the haphazard locations of these objects in the heap, you have way more cache misses than you would with value types.
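To make the layout difference concrete, here is a small illustrative sketch (names and sizes are mine, not from the thread). Typed arrays are the closest thing JavaScript has to value types: they pack data contiguously instead of scattering one heap object per element.

```javascript
const N = 100000;

// Pointer-chasing layout: each point is a separately allocated heap object.
const objects = [];
for (let i = 0; i < N; i++) objects.push({ x: i, y: i * 2 });

// Value-type-like layout: coordinates packed contiguously in one buffer.
const flat = new Float64Array(2 * N);
for (let i = 0; i < N; i++) { flat[2 * i] = i; flat[2 * i + 1] = i * 2; }

function sumObjects(pts) {
  let s = 0;
  // Each iteration dereferences a pointer into the heap (potential cache miss).
  for (let i = 0; i < pts.length; i++) s += pts[i].x + pts[i].y;
  return s;
}

function sumFlat(a) {
  let s = 0;
  // Sequential reads over contiguous memory: cache- and prefetch-friendly.
  for (let i = 0; i < a.length; i++) s += a[i];
  return s;
}

console.log(sumObjects(objects) === sumFlat(flat)); // same answer, different layouts
```

Both loops compute the same sum; the difference under a profiler is in memory traffic, not arithmetic.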

Julia comes fairly close.

And Julia's tuples, named tuples, and structs are value-types without pointers.

Try profiling a few Javascript applications vs equivalent C applications, and you'll probably become significantly less sanguine.

I agree, though you'd be surprised at the difference in attitude between developers in each language.

I think JS is now within a factor of five of C, but JS programmers are significantly more blasé about performance. In many cases they won't blink at a quadratic algorithm when a linear or n log n one is possible, provided the quadratic algorithm is cleaner.

Ditto constant factors -- a C programmer might go out of their way to not iterate over a string more times than necessary, and a JS programmer won't. A C++ programmer might try to minimise lookups into a map, and a JS programmer won't. Some JS shops strongly encourage `array.forEach`, which is 6-7 times slower (on my machine) than an "old-school" `for()` loop.
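A minimal sketch of that `forEach` vs `for()` comparison (the timing API is Node-specific, and both the absolute and relative numbers vary heavily by engine version and machine, so treat the 6-7x figure as one data point):

```javascript
// One million integers to sum.
const data = Array.from({ length: 1e6 }, (_, i) => i);

// Tiny timing helper using Node's high-resolution clock.
function timeIt(label, fn) {
  const start = process.hrtime.bigint();
  const result = fn();
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${ms.toFixed(2)} ms`);
  return result;
}

const a = timeIt('for loop', () => {
  let sum = 0;
  for (let i = 0; i < data.length; i++) sum += data[i];
  return sum;
});

const b = timeIt('forEach ', () => {
  let sum = 0;
  data.forEach(v => { sum += v; });
  return sum;
});

console.log(a === b); // identical result; only the cost differs
```

`forEach` pays for a function invocation per element unless the engine manages to inline it, which is where the constant-factor gap comes from.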

For most web apps it makes no difference, but if performance ever does become a problem it can make it harder to fix. In my experience, profiles in "fast" languages in codebases written in the traditional paranoid style tend to be "lumpier" than profiles in "slow" languages written in the fashionable cavalier mode.

None of this is to judge anyone. I think being lazy and implementing an O(n^2) algorithm is perfectly reasonable when you know n is always going to be small, but I am cautious -- as much as the increased productivity is fun and addictive (and real in my day-to-day work), I've also seen the death by a thousand cuts first hand.

Welcome to 1995 and the wonderful world of Java. Except in Java you can use threads to use multiprocessing, so you can have efficient shared data structures but also hard to debug bugs.

Don’t forget 1987 and the Self language from which Java got Hotspot. Fascinating language and runtime: http://www.selflanguage.org

C++ also does this. A variety of projects, such as MXNet, use JIT compilation. Native compilation gets you far, but there are times where dynamic compilation gets you farther.

I would point out that in '95 JDK 1.0 had only just been released. It wasn't until 2004 that Java 5 shipped with java.util.concurrent, which was a huge improvement for its time. Since then it has been much easier to write concurrent software, and since Java 8 it's a wonderful experience!

What's funny to me is how JavaScript has gone from something of a joke to a very well regarded, performant, and heavily used programming language. I can clearly remember the days when proposing any significant programming in JavaScript would get a developer laughed out of the room. Is there another example of a language that has had such a 180 degree turn around in respect and popularity?

JS had the advantage of being the default language of the web that the current younger generation of programmers grew up with, and then tons of resources were thrown at it.

I'm sure Python, Ruby and PHP would benefit just as much from such attention, although I guess Python has had that in scientific computing, and PHP with the recent version 7 and Facebook's investment.

> respect

Not much respect, but more acceptance of an unavoidable language that, thanks to millions of dollars and human-hours of effort spent polishing it... is ok-ish.

You don’t achieve acceptance without a level of respect that comes from being useful. JS is really one of the most useful languages today.

You can argue whether everything moving to web apps (and I may be using the term web apps wrong, but I’m speaking about anything that runs in a browser) is such a good idea, it’s probably not, but it’s what’s happening because it’s really fucking awesome in terms of getting things to actually work across multiple devices.

JS does something none of the other tech stacks does, in that it integrates really well with your current stack. I work in enterprise; we operate more than 500 different IT systems, and most of them run on Windows. So naturally, our operations ninjas are really great at Microsoft tech. Operating Ruby, PHP, Python and so on can certainly be done within the Windows environment, but we’ve naturally been doing .NET since it fits in better.

JS fits into this infrastructure perfectly. A lot of it is thanks to MS, but we haven’t had to spend money on retraining staff to integrate modern JS into our stack.

The language itself offers a lot of freedom, which includes risk, but I don’t personally think it’s worse in that regard than Python.

> You don’t achieve acceptance without a level of respect that comes from being useful

But this is not the case with JS. JS is used because it is the only possibility on the web. Imagine if COBOL were the only possible language to code for any desktop, and any other language MUST transpile to it.

Just by force of necessity, COBOL would get nicer. But not because it is respected; it's because WHAT OTHER OPTION DO YOU HAVE?

JS eventually migrated to servers and got easier to integrate? Well, the same is true of Lua and others that are even nicer.

But anyway, JS wins by market brute force.

I'd say it's no worse than Ruby or Python. Async-everything is what makes it better, imo. No need for Twisted/EventMachine fragmentation.

This is how people end up using it on the back-end when they do have a choice.

Though "respect" is a corny word to use for any language. I don't respect my screwdriver either.

Modern JS is OK and TS is pretty nice

It's just tolerated, not respected, including by many JS developers.

It's not that performant either, it's just that it's getting closer to Java-class performance which is enough for many types of workloads.

JS is living proof that throwing enough money at a problem can go quite a long way.

I feel everyone ping-pongs between these ideas; I've been there and I'm no longer convinced the way you are. You can't take single threaded code and just make it multithreaded no matter how much time the compiler spends on it. And in a JIT environment, the compiler doesn't actually get much time to spend optimizing code.

You can't just take advantage of SIMD, hyperthreading, multicore, or large caches without writing code in a way that takes advantage of that even at a high level. Some things can be faster when compiled at runtime but everything is a trade off.

But ultimately I believe that C is fast mostly due to manual memory management but that's not a trade-off I'd take for most tasks.

I think his point is that if you write normal single threaded code then your other cores are going to be idle. But if you write in a high level JITed GCd language then the other cores will be busy making your main thread run faster by profiling it, recompiling it with risky optimisations, clearing garbage asynchronously and so on.

What do you mean by "multiple cores to optimise a single threaded piece of code?" This makes it sound like code can automatically become multithreaded which is not true (threading is one of JS's great weak points).

v8 is an amazing piece of engineering but it's not at the point where it allows application developers to take much advantage of SIMD, hyperthreading, or multicore.

What do you mean by "multiple cores to optimise a single threaded piece of code?"

I think he means this is done behind the scenes without your knowledge (some compiler/runtime magic), not some interface you can take advantage of.

SIMD, multicore, and caches don't just magically happen with better compilers. SIMD requires very specific memory access and computation patterns, and cache-friendly code has similar restrictions. The features of even basic javascript fly in the face of code being simd and cache friendly except for the most trivial programs.

Automagically parallelizing general serial code is something that isn't feasible on any hardware similar to modern cpus and probably will never mesh well with fast single-threaded performance (communication and synchronization in hardware is HARD)

I used to be a Smalltalk developer on Windows 3.1 for a brief time in the early 90's. When I found out that internally, Smalltalk was compiled into a sort of intermediate assembly language, and run on a virtual machine, I wondered why couldn't we just distribute the assembly language part over the Internet, where it could be run on any kind of machine with an interpreter. Everyone told me I was crazy, it was impractical, it would never work.

I used to work for a Smalltalk vendor. It was even better than that. For awhile we had the hottest JIT virtual machine. Smalltalks that followed the original design ran bit identically across multiple platforms. At one point, Squeak Smalltalk was running bit identically across 50 combinations of CPU and OS, and Squeak and its descendants probably can run bit identically across several times that number of environments now.

But it gets even crazier. (Non-Smalltalk) In the 90's, there was an OS entirely written in a virtual Instruction Set (TAOS) which was JIT assembled into real machine code as fast as it could be read off of disk and ran at 80-90% of native speeds. This OS could be ported by simply porting the assembler, which typically took about 3 days.

Back to Smalltalk craziness. There were also in-house research versions of Smalltalk that could prune their images as small as 45k, and were suitable for creating command line tool executables. As it was, VisualWorks, if you turned off things like the splash screen, actually could start faster than the Perl runtime in the late 90's, though you'd be hard pressed to create an image below 500kB. (Even getting it below a megabyte was an incredible feat.)

The tech industry could be way ahead of where it is now, if only everyone were like early adopters. The thing is, most people are quite different.

Wow that sounds super, super cool!

As you say, I really wish tech had followed that route, think of all the cool things we'd have.

Picture it. Now, go and build it on top of what we have today!

Juice was close to that:


It's one of those alternate histories for Web development I wish took off.

You are crazy, it is impractical, it will never work. /s

You also can't beat C for energy efficiency. That JIT consumes energy.

Keep in mind that a lot of performance differences come out of the snowball effect of popularity driving product development. X86 being faster than MIPS today has nothing to do with the relative merits of the ISAs and everything to do with the millions of people who wanted to run X86 code really fast for the past 25 years. JavaScript's foray into this scale of popularity is pretty recent, with the widespread adoption of "Web 2.0" and AJAX only dating back to about 2005 (both are older, but spent a while in the throes of obscurity and incompatibility).

I admit the performance is improving.

But it's not C-level performance. Throwing multiple cores at JS to get C-level performance glosses over an important detail: in C, those cores are free to do other things.

JITs aren't a panacea for everything. As the blog post said, if you want to avoid falling off a performance cliff you must write "CrankshaftScript" instead of JavaScript. And the only way to be sure that the optimizer is content is through a specialized profiler.
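As a simplified illustration of the kind of shape sensitivity behind that cliff (exact deoptimization behavior depends on the engine version): V8 specializes property access on an object's hidden class, so feeding one call site objects with mixed shapes can turn a fast monomorphic inline cache into a slower polymorphic or megamorphic one.

```javascript
function magnitude(p) {
  return Math.sqrt(p.x * p.x + p.y * p.y);
}

// Monomorphic: every object passed in has the same shape {x, y}.
for (let i = 0; i < 1000; i++) magnitude({ x: i, y: i });

// Mixed shapes at the same call site: properties added in a different
// order, or an extra property, produce distinct hidden classes, even
// though magnitude() still computes the right answer.
magnitude({ y: 1, x: 2 });        // different property order
magnitude({ x: 1, y: 2, z: 3 });  // extra property

console.log(magnitude({ x: 3, y: 4 })); // 5 — still correct, just potentially slower
```

Nothing here is wrong by the spec; the cost is invisible without a profiler, which is the commenter's point.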

> but for 90% of use cases javascript actually makes sense and can be the most performant.

This is a weird definition of performant: using more resources to achieve the same result faster. It seems akin to saying JS is more performant than C if you buy a newer CPU to run the code on.

> But the more I've learnt about the benefits of "on the fly" / "just in time" optimising compilers, the more I'm convinced this is the future of computing

It's been the future of computing for 30 years, possibly longer.

A couple of the replies here seem to read that you've said something like "JIT magically makes single-threaded code multi-threaded." I'm a little mystified by the reading comprehension on HN...

You can write a multi-threaded optimizer that works on single-threaded code. Using multiple threads on multiple cores to optimize single-threaded code ... is pretty darn cool, but barbegal never said it made the single- into multi- ...

> take advantage of SIMD, hyperthreading, multicore, large caches... without knowing if they're available ahead of time

You can't with Javascript. You need a concurrency-friendly programming model and code structure for the compiler to be able to do anything that fancy.

It’s not really the future. It’s the literal now.

The key being that the code you're writing needs to be performant. If you write O(n!) code, it's going to be slow whether it's C or JavaScript.

Algorithms take you far. Optimized code takes you the rest of the way. If you’re after performance you’ll do both, of course. I’ve found that the only subfield of which I know in which mathematicians still write their own C or C++ is concise/efficient data structures and algorithms.

For some reason I was expecting an internal combustion engine.

same ha ha. Interested either way

You have social icons floating over the text.

I have often wondered if frameworks like React which offer polymorphic methods (eg React.createElement) cause deoptimizations. The common functions also seem to be on the critical path since they will be called 100s-1000s of times per render.

I'm not sure about React, but a fairly common pattern is to dispatch to optimize. The parent function takes several different data types, but it is a very small pass-through that calls a variety of monomorphic functions to do the heavy lifting. It's not technically as fast, but as the amount of processing in the monomorphic functions increases, the cost of the small polymorphic function fades into the noise.
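A minimal sketch of that dispatch pattern (function names are hypothetical): a thin polymorphic entry point routes to monomorphic workers, so each hot loop only ever sees one type.

```javascript
// Monomorphic helper: only ever sees numbers.
function sumNumbers(arr) {
  let s = 0;
  for (const n of arr) s += n;
  return s;
}

// Monomorphic helper: only ever sees strings.
function concatStrings(arr) {
  let s = '';
  for (const x of arr) s += x;
  return s;
}

// Small polymorphic pass-through. Its dispatch cost fades into the
// noise as the monomorphic helpers do more of the work.
function combine(arr) {
  return typeof arr[0] === 'number' ? sumNumbers(arr) : concatStrings(arr);
}

console.log(combine([1, 2, 3]));       // 6
console.log(combine(['a', 'b', 'c'])); // abc
```

The trade-off is exactly as the parent says: the entry point stays polymorphic, but the heavy lifting happens in call sites the optimizer can specialize.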

That's a good practical tip

Has anyone used ChakraCore? How does it compare with V8?

Catfished. I thought this was going to be about cars.

Why hasn't node replaced PHP yet?

Hasn’t it? It’s the most utilized language on the web.

If you’re asking why PHP hasn’t died...Well, because it’s still useful. In the real world you don’t change your tech just because something better comes along because it’s tremendously expensive to do so.

That’s why cobol is still around, and if you want an example of something even worse, it’s also why you still find companies using classic ASP and so on.

And PHP isn’t really that bad. It’s tremendously productive.

I don't like Javascript as a language. Everything from OOP to imports feel hacked in rather than having been supported by the language itself. One can't even add 0.1 and 0.2 together and get what you'd expect. Variable definitions are global scoped by default, semicolons can randomly be omitted, etc. There are just so many things about the language that feel hacky or unpolished.

So it's fast these days... great? There's more that I seek for in a language than just speed. PHP7 and Pypy also do a pretty good job, and one can always offload the heavy lifting to some compiled code if it's really that important to squeeze the last few percent of performance out.

The only thing unique to JS (afaik) that I really like is addressing dictionary keys as object properties (`a={'test':512}; console.log(a.test);` instead of having to do `console.log(a['test']);`). I'm not sure why that is so oddly satisfying, but it is.

> One can't even add 0.1 and 0.2 together and get what you'd expect.

That's floating point math, and persists in many languages [1].

> Variable definitions are global scoped by default, semicolons can randomly be omitted, etc.

Nothing random about it, automatic semicolon insertion is well defined/documented [2]. Also let [3] is your scoping friend.

[1] https://0.30000000000000004.com/

[2] https://www.ecma-international.org/ecma-262/7.0/index.html#s...

[3] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
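For the curious, the behavior and the usual workarounds look like this in JavaScript (the same holds in any language using IEEE 754 doubles):

```javascript
// 0.1 and 0.2 have no exact binary representation, so their sum
// accumulates a tiny rounding error.
console.log(0.1 + 0.2);               // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);       // false

// Workaround 1: compare with a tolerance instead of exact equality.
console.log(Math.abs((0.1 + 0.2) - 0.3) < Number.EPSILON); // true

// Workaround 2: round for display.
console.log((0.1 + 0.2).toFixed(2));  // 0.30
```

Integer arithmetic is exact up to 2^53, so this only bites code that relies on exact decimal fractions (e.g. money, which is better stored in cents).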

> Nothing random about it, automatic semicolon insertion is well defined/documented

Figured someone would say that. Yes, of course it's not random: it's a computer, it follows the same instructions every time.

What I meant, of course, is that it isn't logical. You can't guess it without having to check the spec. In bash you know that a newline is as good as a semicolon, in C you know you always need a semicolon, and in Python you know you never need it. In JS it's somewhere in between. Reading that spec, it's when the next line would have syntax that is illegal if it were a continuation... so you have to do JS simulation in your head to check if your code could, perhaps, make sense as a line continuation. So I guess you just always have to do it. I'm not saying it's a major issue and this is the one reason I don't use it, but similar to how people criticize PHP for having inconsistent function names (which is also not a bug or broken), it would have been nice if it weren't the case.

> let [3] is your scoping friend

It's not about it being possible, it's that global scoped default is just asking for abuse of the global scope. Even PHP doesn't do that, and PHP is really made for quick and dirty web development (or at least it used to be, so it supports many things that make that possible).

You're totally safe unless you start a line with a parenthesis, bracket or operator, so it's not that hard - https://standardjs.com/rules.html#semicolons
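The other classic ASI hazard is `return` followed by a newline; a quick sketch of the safe and the surprising cases:

```javascript
function fine() {
  const a = 1
  const b = 2
  return a + b   // a newline after a complete expression is safe
}

function surprising() {
  return         // ASI inserts a semicolon right HERE...
  {
    value: 42    // ...so this parses as an unreachable block, not a return value
  }
}

console.log(fine());        // 3
console.log(surprising());  // undefined, not { value: 42 }
```

The standardjs rule quoted above covers the leading-parenthesis/bracket case (e.g. a line starting with `(` being swallowed as a call of the previous line's value); `return`-plus-newline is the other one worth memorizing.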

const/let is the way of doing JavaScript now; setting a global will raise an error (click "run with js": http://jsbin.com/hogegikipo/edit?html,console,output). This is true for ES5 strict mode, ES2015 modules and anything you transpile from ES2015 modules.

Speaking for myself, I have 20 years of C like language experience and I can't stand the scoping rules or functional nature of javascript. This is why I like PHP. It comes natural when switching between C, bourne shell, perl and PHP. Throw in the ever changing frameworks a casual user encounters and I want to avoid javascript at all costs.

Mmmm, I have a similar background (the C was long ago, the PHP not so long) but love the scoping rules and functional nature. I think that the C background (declaring variables up the top of a function) helped avoid some of the JS pitfalls, and I find the block scoping in JS to be convenient at times, whereas PHP's leaks things in ways that can bite you sometimes (e.g. a for loop in PHP would leak both the iterator variables and variables defined within it, while they wouldn't in JS).

The functional stuff was weird for a while but just keeps growing on me.
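A small sketch of the JS side of that contrast (the helper function is hypothetical): with `let`/`const`, the iterator and loop-body variables simply don't exist after the loop, where PHP (or `var` in JS) would leak them into the enclosing scope.

```javascript
function lastEven(values) {
  let found;                          // deliberately function-scoped
  for (let i = 0; i < values.length; i++) {
    const v = values[i];              // block-scoped: gone after each iteration
    if (v % 2 === 0) found = v;
  }
  // console.log(i, v);               // ReferenceError: i and v are out of scope here
  return found;
}

console.log(lastEven([1, 4, 7, 8, 9])); // 8
```

In PHP, `$i` and `$v` would still hold their last values after the loop, which is exactly the kind of leak that can bite you later.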

Inertia. It would cost trillions to move all the PHP to JS.

In addition, while you'd have a hard time paying me to code PHP, a huge percentage of PHP devs like the language.

What's the wordpress/drupal/laravel equivalents in Node?


why would or should it?

One can argue that PHP is the least suitable language to write web servers in: no standard async or multithreading, tons of leaks, and so on. Node's asynchronous model is quirky, but given the JS world's "isomorphism" it should have squeezed PHP out of existence by now.

> One can argue that PHP is the least suitable language to write web servers in

Of course, because you don't write a web server in PHP. You put it behind nginx or apache and those can fork off PHP processes (or you use FPM, or etc.) to do everything for you.

Oh sure, and the only reason one would do this is to keep AWS bills up to Amazon shareholders' expectations.

PHP is roughly the hardest language in which to experience the effects of memory leaks, because of its one-process-per-request model.

You don't write web servers in JS, either. You use Node.

Because there aren't a bajillion turnkey Node servers available that require zero actual technical knowledge beyond an FTP client (and maybe not even that).

Why haven’t electric cars replaced gasoline?
