JIT-Less V8 (v8.dev)
337 points by bpasero 5 days ago | 116 comments





For very security-sensitive embedded applications, this could be a huge boon, since it reduces the attack surface, both in terms of executable pages and in the relative simplicity of an interpreter vs. a full JIT. Granted, there are many JS interpreters already available, like Duktape, that offer the same benefits, but the immediate upside of this is compatibility with the full Node/ES6+ ecosystem and Chrome DevTools.

I have to say, Duktape looks like it might have superior resource usage for very low-memory situations.


> I have to say, Duktape looks like it might have superior resource usage for very low-memory situations.

Don't forget Moddable's XS, which does more with even less and just shipped ECMAScript 2019 (ES10) support.

http://blog.moddable.com/blog/es2019/


ChakraCore also has the ability to disable JIT. And we have a build configuration that compiles out the JIT so the binary is smaller as well.

I believe people in the community also have a ChakraCore fork specifically made for using Node on iOS.


Definitely agree duktape is a decent attempt at low-memory situations, but one additional point is that duktape is very slow compared to modern JavaScript engines.

Hence I wonder if we can split Ignition off V8 to create a standalone fast JavaScript interpreter, at the cost of (possibly) more memory consumption than duktape. That could prove useful in many scenarios.


Ignition is very tightly coupled to the rest of V8, starting with the fact that it uses inline caches and the object model to maintain performance, and finishing with it itself being written in "CSA", which is an assembler DSL that is passed through the TurboFan (optimizing compiler) backend to generate the machine code for the bytecode handlers (this has the interesting side-effect that porting V8 to a new platform requires porting the optimizing compiler). There's not really much that can be split off.

Thanks for the explanation! One follow-up question: how can we still call Ignition an interpreter with this flow, where TurboFan is still used to generate machine code? Doesn't that defeat the purpose of an interpreter in V8, which is to be usable on platforms without write access to executable memory, such as iOS or the PS4?

While bytecode handlers (and other builtins) are generated by TurboFan, this happens at V8-compile-time, not at runtime. Their generated code is shipped embedded into the binary as embedded builtins.

This suggests that a specialized app (such as a set-top box, smart TV, or game console) could push more code through the pre-JIT process to further close the performance gap. (This is interesting to me, because I haven’t seen much interest in pre-JIT compilation since the early days of Java, HotSpot, etc.)

“Very slow compared to” is still as fast, or faster than, many other dynamic language runtimes. In my case I saw a 2x-10x difference between V8 and Duktape, which is acceptable given the trade-offs.

Agreed, and I'm not saying duktape doesn't have a use case, I'm merely saying having a standalone ignition interpreter might enable different use cases.

I quite like Duktape because it is really simple to embed as well. The V8 API is comparatively pretty complicated.

I'd like to check that out. Unfortunately, like other projects that idiotically adopt a generic name that can't be Googled without returning an endless swamp of irrelevant results, I can't seem to find any info on "duct tape" as it relates to JavaScript interpreters. Any good pointers?

The use case in question: a few years ago I embedded V8 in a Win32 app with a single .DLL and header file, but V8 has exploded in complexity since then and no longer appears suitable for lightweight applications, meaning anything smaller than a full-fledged Web browser. I need to upgrade that interpreter at some point, so I'm definitely in the market for something comparable to what V8 used to be.

Kind of unfortunate, but I'm hardly the user the V8 team has in mind.


For anyone that is interested, duktape ( https://www.duktape.org/ ).

The API docs are at https://www.duktape.org/api.html , it's a C API and it's pretty easy to work with and bind native code. I kinda wish that more JIT compiled languages had an API as nice as LuaJIT.


Thanks, that was the trick I was missing. Will have a look.

Google duktape, not duct tape.

But doesn't the increasing complexity (an interpreter added to the codebase) also introduce another, new, attack vector?

IMHO, interpreters are simpler and easier to understand than optimizing JITs. Just look at the recent V8 range-check elimination bug around WASM optimizing +0 and -0 differently in a Math-related intrinsic.

The interpreter has always been part of the codebase, since it is used during startup and while the code is still being JITed.

Not always, actually. V8 came out in 2008 and didn't have an interpreter until 2016. For the first eight years, the non-optimization execution was also a JIT, just a simpler one.

The information I was looking for, what kind of interpreter they mean, was in an Ignition article linked from the main article: https://v8.dev/blog/ignition-interpreter

"With Ignition, V8 compiles JavaScript functions to a concise bytecode, which is between 50% to 25% the size of the equivalent baseline machine code. This bytecode is then executed by a high-performance interpreter which yields execution speeds on real-world websites close to those of code generated by V8’s existing baseline compiler."


Note that the "existing" baseline compiler is now removed, and entirely replaced by the interpreter.

Apple requires browsers on iOS to use the same rendering engine as Safari, but would it allow a browser to use a different JavaScript engine? Or is this a loophole that Apple didn't foresee that they will be closing in the future? Are the rendering engine and JavaScript engine so separated that you could use the Safari rendering engine, but a different JS engine?

There's a software-level rule that forbids JIT engines: apps are never allowed to mark pages as executable. Under that rule alone, a JIT-less Chrome could run on iOS now. But there's also a policy-level rule:

> 4.7 HTML5 Games, Bots, etc.

> Apps may contain or run code that is not embedded in the binary (e.g. HTML5-based games, bots, etc.), as long as [...] the software [...] only uses capabilities available in a standard WebKit view (e.g. it must open and run natively in Safari without modifications or additional software); your app must use WebKit and JavaScript Core to run third party software and should not attempt to extend or expose native platform APIs to third party software

https://developer.apple.com/app-store/review/guidelines/


That wording is kinda fuzzy about the distinction between the old UIWebView and the new WKWebView, which are very different.

Note that you can use either UIWebView and JavaScriptCore in your own app, which don't JIT, or WKWebView and its JavaScript engine, which does have JIT enabled but runs in a separate process (not unlike Microsoft OLE out-of-process servers). Apple allows their own trusted apps to JIT (i.e. Safari, which uses the same engine as WKWebView, running in a separate process). But UIWebView with JavaScriptCore running in your app is not allowed to JIT.

You can extend the JavaScriptCore interpreter used by UIWebView (which you can also use standalone without a UIWebView) with your own native Objective-C code, which it can call directly via a JavaScript/Objective-C bridge. (See NativeScript for example.) But that's impossible to do with WKWebView, whose JavaScript engine (I'm not sure it's the same JavaScriptCore framework, but it might be) runs in a different process. All you can do is send messages (like JSON events or whatever) via IPC over Mach ports, not call your own code directly.

https://www.nativescript.org/


You could use the Xcode linker to embed your JS/WASM code in the binary as a read-only resource/section, and then run it using JIT-less V8. Bytecode is generally more compact than native code for an ISA like ARM64, so this could be useful in order to reduce app size.

> You could use the Xcode linker to embed your JS/WASM code in the binary as a read-only resource/section, and then run it using JIT-less V8.

Or just read it from a file?


It needs to be embedded in the binary, because code that is not so embedded has different rules applied to it. So, it has to be done either in the compiler (e.g. as compile-time const arrays, which will then be part of the rodata section) or in the linker; the latter is arguably a bit easier.

"The binary" in this context means the ipa (archive) that you send to Apple.

They are making a distinction between downloaded data and data included in the archive, but using sloppy terminology.


But this isn't code (at least, to the operating system)?

That's a lot like what Unity3D used to do with Mono CLR byte code on iOS. But now it uses IL2CPP to compile it into C++.

Also, to use the same engine for all platforms

2.5.6 Apps that browse the web must use the appropriate WebKit framework and WebKit Javascript

They thought of the loophole already


No, the two are deeply linked and you don't have the kind of access you need to replace it.

If it were allowed, you could use V8 for other, non-browser apps.

I think that should be possible. The main restriction seems to be

2.5.6 Apps that browse the web must use the appropriate WebKit framework and WebKit Javascript.

Using V8 to execute JS without 'browsing the web' sounds okay.

And perhaps also this, although it is not clear to me whether downloaded JS scripts are considered 'code':

2.5.2 Apps should be self-contained in their bundles, and may not read or write data outside the designated container area, nor may they download, install, or execute code which introduces or changes features or functionality of the app, including other apps. [...]

https://developer.apple.com/app-store/review/guidelines/

Anything else? I'm curious to learn more.


As far as I know Apple doesn't require browsers to use the same rendering engine as Safari. It's just that so far any rendering engine + JS engine required a JIT to be performant, and a JIT is not allowed because apps are not allowed to write to executable memory. That's why the 3rd-party browsers all used the same rendering engine as Safari.

The app store rules disallow other rendering engines: 2.5.6 Apps that browse the web must use the appropriate WebKit framework and WebKit Javascript

Before WKWebView, even the Safari-based UIWebView didn't JIT JavaScript. Chrome for iOS was released years before WKWebView was added, so a lack of JIT in their own JavaScript engine would not have made a difference.


What about Proper Tail Calls?

Duktape and XS support them. JSC has had them for years now too. That's a big feature to accidentally miss. You switch and then inadvertently blow your stack every now and then because V8 decided to remove their already-implemented tail calls for no good reason. (Lest we go down that road again: the "alternative syntax" proposal was dropped, so there are zero excuses aside from a deliberate violation of the spec.)
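To make the stack-blowing concrete, here is a hedged sketch. The call below is in tail position per the spec (strict mode), so a PTC engine such as XS or JavaScriptCore can run it in constant stack space, while engines without PTC (like V8/Node) grow the stack and throw a RangeError for large inputs. The function name and limits are illustrative.

```javascript
'use strict';

// A properly tail-recursive sum: the recursive call is the last thing
// the function does, so a PTC engine can reuse the stack frame.
function sum(n, acc) {
  if (n === 0) return acc;
  return sum(n - 1, acc + n); // tail position
}

console.log(sum(1000, 0)); // 500500 -- shallow enough for any engine

// On an engine without PTC, a large input overflows the stack:
try {
  sum(1e6, 0);
} catch (e) {
  console.log(e.name); // "RangeError" on engines without PTC
}
```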


For a team that has been pushing the cutting edge of Javascript VM performance for many years, it must feel pretty weird to ship a feature that allows one to willingly regress performance so much!

A fast V8 with jitting isn't going away. Jitless V8 is meant for embedders that either cannot or do not want to allocate executable memory at runtime.

(Also, in many common real-world workloads the performance regression is minimal.)


I was worried my original comment might be misunderstood, that's why I qualified it as "... allows one to >>willingly<< regress performance...". Looks like it still got misunderstood anyway ;)

I believe this was also (at least partially) done for performance reasons!

In general, compared to a JIT, an interpreter is faster to start executing code, and can more quickly (and efficiently!) execute code that will only run once.

V8's "Ignition" started as a way to replace the "baseline" JIT in their engine. It can begin executing code while the optimizing compiler gets up to speed and analyzes what needs to be optimized, and it can execute code that is extremely likely to only run once (like top-level JavaScript).

The bytecode representation they use for Ignition is also used by their optimizing compiler "TurboFan", which means that they throw away the actual source code after it's been converted to bytecode, saving quite a lot of memory!

Altogether this means that the Ignition+TurboFan pipeline is faster to start executing, has lower resource usage, and is much simpler than the old stack of a "baseline" JIT (full-codegen) and their old optimizing compiler (Crankshaft).

Being able to disable the optimizing JIT entirely is just another bonus of the architecture!


I'm curious if the runtime flag to enable JITless mode could also be enabled at compile time, removing the JIT compiler from the binary entirely. That could be really useful for projects where memory comes at a premium (and performance is not a major concern), like micropython but for JavaScript.

I assume this also doesn't support WASM when the JIT is disabled (or rather, when you can't write to executable memory), but if it did it could be a neat way to write decently performant software for tiny systems with just some JavaScript "glue".


> I'm curious if the runtime flag to enable JITless mode could also be enabled at compile time, removing the JIT compiler from the binary entirely. That could be really useful for projects where memory comes at a premium (and performance is not a major concern), like micropython but for JavaScript.

Theoretically yes, but this is not implemented. It should not be too hard to drastically reduce binary size with a build-time flag.

> I assume this also doesn't support WASM when the JIT is disabled (or rather, when you can't write to executable memory), but if it did it could be a neat way to write decently performant software for tiny systems with just some JavaScript "glue".

Correct, wasm is currently unsupported. Interpreted wasm is possible in the future, but would likely be very slow.


> It should not be too hard to drastically reduce binary size with a build-time flag.

And then how portable would the code be? Would this be a path to running node on CPUs without JIT support? Or does it still have to mess with the calling convention at an assembly level?


According to another comment in this thread [1] the interpreter is actually generated by the JIT at compile time so no, this wouldn't let you run V8 on a CPU that isn't currently supported.

[1] https://news.ycombinator.com/item?id=19379305


In JSC it's guarded by compile time, runtime, and iOS OS enforcement.

JIT-less should really be the default on the web. The security implications of RWX memory are just so bad, and the amount of time that an exotic JIT meaningfully improves behavior of real world web browsing (as opposed to JavaScript benchmarks) is limited. For the rare web app where a JIT is critical, a simple "Do you really trust this web page to perform a lot of computation?" dialog would mitigate a lot of zero-click/one-click attacks.

V8 already employs W^X, i.e. memory pages allocated for V8's heap are either writable or executable, but not both at the same time.

By allowing JIT at all, a small ROP chain can call VirtualProtect to make a larger payload executable.

Sure you can do everything with ROP, but it is less convenient (and Intel CET might eventually make ROP attacks actually hard).


Well, except for WebAssembly. But even then, it's still fundamentally possible to hijack control of whatever changes the pages from RW to RX.

> The security implications of RWX memory are just so bad

Such as? Any practical examples here?

Code execution is code execution. RWX just lets you execute faster code, it doesn't give you any privileges or permissions you didn't otherwise already have.



Which didn't need RWX by using ROP chains instead...?

The security vulnerability there was that the process had the ability to invoke a shell at all, not how they got to invoking the shell. In-process sandboxing isn't a thing anymore; Spectre proved that. In that context, what risk does RWX actually pose?


Anyone know how common attacks that take advantage of the JIT technology actually are?

It's been used consistently to get initial code execution on the PlayStation 4, on iOS (for attacks involving just following a web link), and probably used pretty consistently in other nation-state attacks, but I have no real data to back this up.

The Pegasus spyware for instance utilized a JIT attack in JavaScriptCore in Safari for the initial stage.


The RWX memory in JSC has frequently been used as the start of full remote code execution, but it has become progressively harder to abuse over the years (via W^X and, on newer hardware, PAC).

As mentioned in the article, this is interesting for game developers that aren't allowed to run unsigned code (e.g. JIT).

JavaScript is very popular in the programming zeitgeist and is likely to be a language non-programmers are exposed to via the web. Part of me wonders if game engines would take to integrating it instead of Lua, since designers might be more familiar with it.


THIS is another reason to complain to EU regulators [1], regarding Apple's unfair trade practices. Never before in the history of computing has a company so blatantly suppressed the competition and gotten away with murder. V8 should not be the one needing re-architecture to meet anti-competitive iOS App Store rules; the rules need to make common sense and treat the competition fairly.

[1]: https://techcrunch.com/2019/03/13/spotify-files-a-complaint-...


Wrong thread – this should have been posted on https://news.ycombinator.com/item?id=19377322.

How bad of an idea would it be to create a ROP-based JIT engine for these platforms? You could hand-craft the gadgets and use the stack to reduce interpreter dispatch overhead.

That's called "Subroutine threading"! :-) https://en.wikipedia.org/wiki/Threaded_code#Subroutine_threa...

V8's Ignition interpreter is implemented with "Direct threading", which is quite similar but (probably?) faster on modern processors-- it does an indirect jump to the next bytecode handler instead of a return: https://news.ycombinator.com/item?id=10034167

"The bytecode handlers are not intended to be called directly, instead each bytecode handler dispatches to the next bytecode. Bytecode dispatch is implemented as a tail call operation in TurboFan. The interpreter loads the next bytecode, indexes into the dispatch table to get the code object of the target bytecode handler, and then tail calls the code object to dispatch to the next bytecode handler."
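A minimal JavaScript sketch of the dispatch-table idea described in that quote. The opcodes are toys, not V8's real bytecode, and in V8 the handlers are machine-code stubs joined by real tail calls, which plain JS (without PTC) can't express, so this version uses ordinary calls:

```javascript
// Toy bytecode interpreter: each handler does its work and then hands
// control to the next handler via the dispatch table, mimicking
// Ignition's threaded dispatch.
const OP_PUSH = 0, OP_ADD = 1, OP_RET = 2;

function run(code) {
  const stack = [];
  const handlers = [
    function push(pc) {            // OP_PUSH <imm>: push immediate operand
      stack.push(code[pc + 1]);
      return dispatch(pc + 2);
    },
    function add(pc) {             // OP_ADD: pop two values, push their sum
      const b = stack.pop(), a = stack.pop();
      stack.push(a + b);
      return dispatch(pc + 1);
    },
    function ret() {               // OP_RET: result is top of stack
      return stack.pop();
    },
  ];
  function dispatch(pc) {
    return handlers[code[pc]](pc); // index into table, jump to handler
  }
  return dispatch(0);
}

console.log(run([OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_RET])); // 5
```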


"V8 is Google’s open source high-performance JavaScript and WebAssembly engine, written in C++." from its home page. I didn't immediately know what it was.

I honestly don't blame you for not knowing, but there's a certain amount of assumed knowledge for readers of this site.

And?

A common (and, in my opinion, legitimate) complaint about blog posts that appear here often is that the company publishing the post doesn't say what it is they are or are doing, and as a result, why anyone should care about the blog post.

Wow, this is pretty neat.

We might be able to finally run more safe cryptography in the browser with constant-time guarantees (there are other concerns with browser-based crypto though).


I'd like to try this on the desktop. The difference in memory usage is probably an order of magnitude. That could make my 2GB laptop usable again. I doubt I'll see the difference on any site I care about (no facebook for example). I remember when Java could make the browser totally unusable for several minutes. An interpreter would have avoided that.

From the article:

> Memory consumption only changed slightly, with a median of 1.7% decrease of V8’s heap size for loading a representative set of websites.

What makes you believe it should be anything significant? After all, the JIT-compiled code cannot be that large.


JIT code for JS is /huge/ at the lower optimization levels, dramatically larger than an interpreter's bytecode, on the order of 10x: many megs of code are generated by relatively small amounts of JS code.

The "without allocating executable memory at runtime" means "without allocating pages that are marked as executable," not "without allocating memory."

It still runs the bloated JavaScript programs, just slower

Java did use an interpreter - it didn't get a JIT compiler until version 1.2. Java is slow for many other reasons as well.

the difference in memory usage is 1.7%, according to the article.

This means react native can use V8 on iOS? Does this have any consequences on performance or similar?

Both the rationale and some first-level benchmarks are given in the article.

I think JavaScriptCore is more performant than V8. So, it is not necessary.

That is not yet clear. Early adopters have reported (jitless) V8 to be at least as fast as (jitless) JSC on Octane2 on a native iOS device.

Yes. Says so in blog post.

Does this avoid some of the issues with Spectre et al?

Nope, these are unrelated security concerns.

Just another indication of Google trying to take over the world. There's a class of machines where Chrome can't run? We need to fix that, stat!

The tone may be flippant, but I was serious. Google's M.O. is to expand their reach into as many corners of our lives as possible. Having devices that can't run Javascript on Chrome is an impediment to that goal, and so I'm sure the marching orders were to find a way to make it work. It is already acknowledged that some upcoming work will be done to improve areas that are still too slow.

I'm skeptical of the claim of improved security. Theoretically, if there were some horrible bugs in the JIT, one could craft malicious input data causing the JIT to insert arbitrary code in the code heap. In practice, it doesn't seem possible. At least HotSpot has been JIT:ing code for decades and no one has been able to find such an exploit.

> Theoretically, if there were some horrible bugs in the JIT, one could craft malicious input data causing the JIT to insert arbitrary code in the code heap. In practice, it doesn't seem possible.

This is very possible: just do a search for ${your favorite JIT} arbitrary code execution, and you'll almost certainly see a real-world vulnerability.

> At least HotSpot has been JIT:ing code for decades and no one has been able to find such an exploit.

Yeah, no. See for example https://www.syscan360.org/slides/2013_EN_ExploitYourJavaNati...


Most of the time they're not bugs in the JIT - they're bugs in other parts of the software, basically your path to exploit is:

1. Find a bug that gives you an arbitrary read

2. Find a bug that lets you write to some arbitrary location

3. Find a bug that lets you jump to some location

4. Use [1] to find the location of the RWX region

5. use [2] to copy your exploit code into [4]

6. use [3] to jump to [4]

7. Profit

Oftentimes a single use-after-free gives you 1, 2, and 3. Essentially you use the UaF to get multiple different objects pointing to the same place, but as different types. E.g. you get a JS function allocated over the top of a typed array's backing store; then from JS you have an object that the runtime thinks is a typed array, but the pointer to its backing store is actually pointing to part of the RWX heap. Then all you have to do is copy your shellcode into the corrupted typed array, and call the function object.

(This requires a GC-related use-after-free, and most of the JS runtimes have gotten progressively more aggressive about validating the heap metadata, but fundamentally if there's a GC bug it's most likely just a matter of how much work will be needed to exploit it)


But the feature the exploit takes advantage of isn't just in time compilation, it is compilation! An ahead of time java compiler would have suffered from the exact same problem. In fact, any language compiling to machine code would be just as vulnerable.

Yes, but when pre-compiling, you implicitly trust the code. JITs like V8 are used to execute arbitrary code on your device, where such an exploit is much more harmful.

Untrue. Dart code for example is AOT compiled but untrusted. Various Javascript implementations are also AOT but also supposed to be used for untrusted code.

I have no idea what you're talking about. Under what circumstances is Dart AOT compiled and run untrusted? No browsers support Dart as a first-class citizen. If you're talking about compiling Dart into JS, that's obviously not what anyone is talking about.

There are no ECMAScript AOT compilers. By definition, ECMAScript must be run with an interpreter. AFAIK, with the dynamic complexity of the language, it's impossible to AOT-compile even without things like `eval` and `new Function`.

A better example would be NaCl which as I understand it runs native machine code in a sandbox.


What I'm talking about? "Yes, but when pre-compiling, you implicitly trust the code." Citation needed. It's not true at all.

Can you give one example of a case where code is compiled AOT and not trusted during compile time? (Your example of Dart was challenged, and I agree that it is not an example, so a more detailed explanation of why it is an example would count.)

Why on earth wouldn't Dart count? It is AOT-compiled and meant to be run untrusted inside a Dart VM inside a web browser. That was the intention of the project, even if it was cancelled and the VM deprecated. For more examples, see ActionScript on iOS, TFA itself, or any of the myriad of projects trying to AOT-compile JavaScript. For example https://link.springer.com/article/10.1134/S036176881701008X

As others have already noted, JIT codegen bugs leading to exploits do indeed happen and your intuition is mistaken. Here's one from Firefox's JS JIT from just a few months ago: https://bugzilla.mozilla.org/show_bug.cgi?id=1493900

>In practice, it doesn't seem possible. At least HotSpot has been JIT:ing code for decades and no one has been able to find such an exploit.

Tons of JITs have had exploits...

https://en.wikipedia.org/wiki/JIT_spraying


Security bugs in HotSpot can and do happen. Check out the CVE list for the JRE: https://www.cvedetails.com/vulnerability-list.php?vendor_id=...

Meltdown and Spectre are only possible in the browser because the JIT allows you to write JavaScript that you know is then compiled (JITed) into a very tight assembly loop. Same for Rowhammer.

That's not really a good argument. Taken to the extreme that'd be saying that JS engines should intentionally be slow in order to be "secure". JIT the code then inject a bunch of nop loops everywhere - you'd still be "preventing" meltdown, spectre, and rowhammer, but waste less power doing it.

JS engines are already intentionally slow. Look at all the hacks done to avoid random JS getting its hands on a precise timing source. Last time they straight up did away with SharedArrayBuffer.

This is all done to keep JIT around despite the obvious and massive security impact.
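For the curious, the timer-precision mitigation mentioned above amounts to rounding timestamps down to a coarse granularity. A toy sketch (real browsers do this inside `performance.now()`; the 5 ms granularity here is illustrative, actual values differ per browser):

```javascript
// Round a timestamp down to a coarse granularity, destroying the
// fine-grained resolution that timing side channels depend on.
function clampTimestamp(tMs, granularityMs) {
  return Math.floor(tMs / granularityMs) * granularityMs;
}

// Two events 3.8 ms apart become indistinguishable after clamping:
console.log(clampTimestamp(1000.1, 5)); // 1000
console.log(clampTimestamp(1003.9, 5)); // 1000
```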


You can read more about the use cases and targeted user here: https://goo.gl/kRnhVe

They mention Cobalt (to allow targeting playstation), react native, nativescript, pdfium, and chrome's proxy resolver.


If I recall correctly every browser security bug in recent years has used the RWX blocks to get full RCE. If a process is not able to ever get RWX memory your code has to be entirely ROP/JOP based which is a much higher barrier.

JIT spraying for ASLR defeat is a thing https://en.m.wikipedia.org/wiki/JIT_spraying

Aside from the other replies, I'm baffled as to how you decided that Hotspot being safe would mean V8 is safe.

I keep wondering what the deal is with JS on the backend. Why does everyone love it so much? Let's not forget the 10-day design, dynamic typing (and weak typing, compared with Python), slowness, null vs. undefined, etc. JS is a scripting language; scripting languages are supposed to be for controlling the behavior of applications (i.e. browsers), not for writing applications in and of themselves. Not to mention the dependency hell, NPM insecurity, etc. I see the purpose for limited use in websites, but definitely not the PWA or backend stuff.

Can someone who uses it happily on the backend talk about why they like it and why it's good?


It sounds like you read a lot of articles about why JS is bad, but don't have a lot of experience with it.

* 10-day design? Nobody is using Javascript 1.0 anymore.

* Typing is available to various extents thanks to Flow and/or Typescript.

* Slowness... I don't know what you're referring to. Javascript isn't slow.

* null vs undefined. What about them? They are two different things with different meanings.

* Dependency hell. I assume you refer to the many small modules on NPM with dependencies on other modules. Not sure what the problem here is per se. Avoid them if you don't like dependencies.

* NPM insecurity - what?

I like JS on the backend because it's a nice, flexible language to work with, with a healthy and cheap (cost-wise) ecosystem. I get a lot of stuff done very quickly, and I can run my stuff pretty much everywhere.
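On the null-vs-undefined point, the distinction the parent alludes to fits in a few lines: `undefined` is what you get for something never set, while `null` is an explicit "no value" (with the well-known `typeof null === "object"` quirk):

```javascript
const obj = { a: null };

// `undefined`: the property was never set.
console.log(obj.b);            // undefined
console.log('b' in obj);       // false

// `null`: the property was deliberately set to "no value".
console.log(obj.a);            // null
console.log('a' in obj);       // true

// They compare loosely equal but are distinct values with distinct types.
console.log(null == undefined);  // true
console.log(null === undefined); // false
console.log(typeof null);        // "object"
console.log(typeof undefined);   // "undefined"
```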


>NPM insecurity

NPM packages can contain malicious code. There's no NPM review process, and you can't point to specific versions to lock in your own reviews (package administrators can change whatever files they'd like). There's no such thing as a verified-safe dependencies list because the file you reviewed last month might not be downloaded today.


You definitely can point to specific versions.

That being said, most package managers can contain malicious code and very few of them actually review their packages.

Besides, any company using NPM seriously probably has its own proxy in front of it, so there's no case of "the file might not be downloadable anymore," even if that were a problem with NPM itself.
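For reference, pinning to a specific version in `package.json` is just an exact version string instead of a `^` range (and a committed `package-lock.json` additionally records an integrity hash per tarball). Package names and versions below are placeholders:

```json
{
  "dependencies": {
    "left-pad": "^1.3.0",
    "some-audited-lib": "1.2.3"
  }
}
```

`^1.3.0` floats up to any compatible 1.x release; `1.2.3` installs exactly that version.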


>and you can't point to specific versions

Yes you can.

>no such thing as a verified-safe dependencies list

There are audits.


NPM insecurity: lots of malicious packages discovered. Also, left-pad.

Slowness: Yes, it is. Look at all the benchmarks comparing it to, say, Go.


People were using Ruby and Python (and before that Perl and PHP) on the backend in many cases. I think it's likely that those are the kinds of projects which are using JS on the backend now, while the Java and C# people are continuing to do their backends in Java and C#.

One of JavaScript's big advantages over Ruby and Python was performance, both because the standard runtime is faster (JITing JavaScript turned out to be a lot easier than JITing Ruby and Python) and because its fundamentally asynchronous nature was a better match for webservers.

And although NPM sucks in a number of ways, I've always found it easier to use than Python's dependency management.


Anecdotal, but as developer who has done mainly C# for the last decade-and-a-half, I've switched to Node.js for -a lot- of my non-enterprisey work.

> JITing JavaScript turned out to be a lot easier than JITing Ruby and Python

Why was this the case?


I suspect that a lot of the impediment to JITing Ruby and Python effectively was due to large and important libraries (standard and third-party) being written in C using interfaces that were not JIT-friendly. JavaScript in the browser also depends on C/C++ interfaces to important functionality, but the entire stack for each engine was controlled by a single organization, and was always released as a unit (the web browser). I think this largely eliminated library inertia (or whatever you want to call it) as a problem when replacing a JavaScript engine.

The other advantage JavaScript had was that there were large, well-funded organizations (notably Google and Mozilla) competing on performance. Python and Ruby were always community projects and they had a strong emphasis on maintaining backwards compatibility with a large and diverse ecosystem, and there wasn't a lot of demand for faster implementations.


JavaScript is a relatively small and simple language with relatively constrained semantics that allow it to be implemented relatively simply.

Ruby and Python are the opposite of that. Ruby and Python are I'd say literally a thousand times more complicated to compile than JavaScript.


I don't like JS very much so I wouldn't want to use it anywhere it could be avoided, that being said it's pretty clear why some people think otherwise, regardless of the qualities and defects of the language.

Using JS in the backend means that you don't have to learn a new language if you're already familiar with it in the browser. Given that webdev is extremely popular with new developers these days, it's not surprising that they might want to reuse the technology when they have to write backend code instead of learning a whole new language. Similarly companies can reuse their pools of webdevs to write non-web applications instead of hiring new personnel or having to retrain the existing coders.

It also means that you can reuse code from the browser in the backend.

Sure if you find JS a clunky and subpar language it might be disappointing to see it spread the way it does but hey, at least it's not PHP!


How is it slow? [Citation needed]

The async model is easy to use, so you get good performance before you even optimize it. It comes out of the box with good JSON serialization/parsing, so that's one less dependency. Not really sure where you're coming from.
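A small sketch of what the parent means by the out-of-the-box async model and built-in JSON support; `fetchUser` is a hypothetical stand-in for real I/O:

```javascript
// Simulate an async I/O call that resolves with a JSON payload.
function fetchUser(id) {
  return new Promise((resolve) =>
    setTimeout(() => resolve(JSON.stringify({ id, name: `user${id}` })), 10)
  );
}

async function main() {
  // Kick off both requests concurrently; no threads or callbacks needed.
  const [a, b] = await Promise.all([fetchUser(1), fetchUser(2)]);
  // JSON parsing is built in, so no extra dependency is required.
  const users = [JSON.parse(a), JSON.parse(b)];
  console.log(users.map((u) => u.name)); // [ 'user1', 'user2' ]
}

main();
```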



