Faster JavaScript Calls (v8.dev)
443 points by feross 54 days ago | 126 comments



I've brought this up before, but I wonder if a "JS constraint spec" could be added to the JS standard that guarantees against this sort of (already uncommon) dynamic behavior in certain cases, so that code can be more aggressively optimized when possible

If you're already using TypeScript to constrain your code, why couldn't it generate interpreter/compiler hints?

These could take the form of a language-agnostic metadata file, similar to source maps, that optionally ships alongside the JS bundle

Edit: Several people have misunderstood, so I must not have articulated this well. What I meant was not a new/stricter subset of JS that forbids certain dynamic behaviors across the board. What I'm talking about is having TypeScript (or otherwise), which knows things at compile-time about how specific pieces of code are and are not called, manifest this information in a way that V8 can digest and act on directly, for that specific codebase. V8 already tries to guess this information and uses it to decide which things to optimize and how, but it's treating the JS bundle as a black box even though in many cases, an earlier stage of the pipeline already had this information on-hand and threw it away.

In addition to being more granular, this (like TypeScript itself) would allow parts of a codebase to continue being fully dynamic while other parts are well-constrained.


The V8 team experimented with "Strong Mode", which was approximately this: a fast subset of JS [0]. The experiment was ended [1].

[0] https://docs.google.com/document/d/1Qk0qC4s_XNCLemj42FqfsRLp...

[1] https://groups.google.com/g/strengthen-js/c/ojj3TDxbHpQ/m/5E...


From a quick glance this looks like a further subsetting a la strict mode, which isn't really what I meant

I clarified in another thread, but what I meant to convey is a way of giving very specific hints about specific pieces of one codebase:

"Function X will only ever take two arguments, they will be of types Y and Z"

"This object will only ever have these properties; it will never have properties created or deleted"

"This prototype will never be modified at runtime"

This could guarantee ahead-of-time that certain optimizations - packing an object as a struct instead of a hashmap, for example - can be done, so that V8 doesn't have to spend time speculating about whether or not to JIT something
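
To make this concrete, here's a purely hypothetical sketch of what such a sidecar hints file might contain; none of this exists today, and the format and names are made up for illustration:

    // hints.meta.js (hypothetical format, shipped alongside the bundle
    // the same way a source map is)
    export default {
      "src/vec.js": {
        "add":  { arity: 2, params: ["Vec2", "Vec2"] }, // never called otherwise
        "Vec2": { shape: ["x", "y"], sealed: true },    // no props added/deleted
        "Vec2.prototype": { frozen: true }              // never modified at runtime
      }
    };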


The problem is less about the annotations (you can already infer most of this from a single pass over the code) than it is about the calls. If you have a reference to a function, you need that data handy. Unless you're copying it around with your function reference, you don't have that data (and it's prohibitively expensive to try).

JS is heavy on function references (closures are one of the most popular JS idioms), so it's not easy to know at call time how you can optimize your code.


>"This object will only ever have these properties; it will never have properties created or deleted"

>"This prototype will never be modified at runtime"

You can already give these hints with Object.freeze/Object.seal
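
For example:

    class Point {
      constructor(x, y) { this.x = x; this.y = y; }
    }
    const p = new Point(1, 2);
    Object.seal(p);                 // properties of p can't be added or deleted
    Object.freeze(Point.prototype); // the prototype can no longer be modified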

Not really hints though; they enforce a certain behavior, because hints aren't enough. A JS engine may already assume you'll never change the objects, but it still has to support cases where you then do so anyway.

I don't think these alone would enable any substantial speedups, since the performance issues arise where unknown objects are used.


> You can already give these hints with Object.freeze/Object.seal

You can but my understanding is the performance tradeoffs aren't great unless you have a workload that especially benefits from it. Those calls are expensive.


OK let's say you have f, it only takes arguments a:Y, and b:Z.

I call f(Z, Y). What happens? Does the machine segfault? Probably not... so you're in "check the types at the callsite" territory. And honestly those kinds of boundary checks are the problem, so to speak.

Are we doing static analysis of the entire codebase (a sort of closed-world assumption)? You could maybe do something like this for non-exported functions, though at the V8 level that's tough.

I think that there would be possibilities to somehow get systems to be set up in a way to do more cleverness along these lines, but so long as you're taking in stuff from the outside world you end up needing to set up relatively costly checks, relative to the cost of the "default" JS semantics that you're trying to avoid paying.
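
To illustrate, a sketch of the kind of entry guard a "typed" function would still need when it's reachable from untyped code (fGeneric is a hypothetical slow-path fallback):

    function f(a /* :number */, b /* :string */) {
      // The hint promises (number, string), but nothing stops f(b, a)
      if (typeof a !== "number" || typeof b !== "string") {
        return fGeneric(a, b); // hypothetical deoptimized path
      }
      // ...fast path, compiled assuming the hint holds
    }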


Isn't that what asm.js did?


Maybe from a certain perspective. But it worked by conforming the bundle to a particular subset of JS that the authors knew to be optimized, instead of expressing information to the interpreter outright. That seems like a comparatively limited channel for communication.


From what I understand, in Safari at least they tried to make many of the optimizations general. So if you use asm.js-style type indications in your code even without following the full spec, you might see some performance benefit.

I have sometimes found a speedup when adding an apparently superfluous |0.
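
For reference, that annotation style looks like:

    // asm.js-style coercion: |0 truncates to a 32-bit integer, which
    // lets the engine use integer arithmetic throughout
    function add(a, b) {
      a = a | 0;
      b = b | 0;
      return (a + b) | 0;
    }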


The highly optimizable, explicit and precise code for the interpreter is WASM.

JS VMs don't need more hints from the code. They need guarantees. They already analyze and profile JS, but as long as JS is allowed to be dynamic (and it has to, it won't be JS without it), then they have to keep the complexity and cost of all the extra runtime checks and possible deoptimizations.


AFAICT the V8 JIT collects such information from the code's behavior, and uses it to generate efficient machine code.

If v8 could accept Typescript directly, it probably could pass this kind of information to the JIT directly, too. But the input language is ES6 or something like it, and it has to follow the calling conventions of it. I'm afraid that adorning ES6 with this information would be either brittle or backwards-incompatible.


> I'm afraid that adorning ES6 with this information would be either brittle or backwards-incompatible

Like I said above, it could be shipped as separate (optional) metadata files associated with source files, the same way that source maps already work


It can't trust that all the metadata is correct, otherwise security issues can happen. And if you need to do that, why not just gather / regenerate the same information at compile time.

Also, what do you do about code loading? e.g. scripts loaded from other files at runtime, or eval? Does it throw an error if a third-party script uses a function incorrectly? Or do we assume that metadata is local-use only?

There are a lot of things in JavaScript and also the "browser environment" (e.g. ads, third-party scripts) that can limit the utility of traditional compiler techniques.


There are already stepped levels of optimization going on in the JS engine; it goes to great lengths to reverse-engineer whether something can probably be optimized or not, and to handle all the edge cases where it needs to bail out into a less-efficient mode when some assumption is violated. All I'm talking about is a way to give it extra hints about what to eagerly optimize and how. All of that other robustness can stay in place.


Pretty much all that buys you is a small reduction in an already-small warm-up phase before the jit chooses a late-stage optimization (possibly with an increased cost for loading that phase, so even that small gain may be reduced). Only for code that uses this. And performs worse if it proves incorrect, as bail-out logic is generally a bit costly.

Browsers have experimented with hints quite a lot. Nearly all of them have been rolled back, since adding the general strategies to the JIT is vastly more useful for everyone, and performs roughly as well or better, since it identifies the optimization everywhere rather than only in annotated code.

---

The only ones I'd call "successful" so far have been 1) `"use strict";` which is not really an optimization hint, as it opts you into additional errors, and 2) asm.js, which has been rolled back as browsers now just recognize the patterns.

asm.js did at least have a very impressive run for a while... but it's potentially even more at risk of disappearing completely than many, since it's rather strictly a compiler target and not something humans write. wasm may end up devouring it entirely, and it could happen rather quickly since asm.js degrades to "it's just javascript" and it continues working if you drop all the special optimization logic (which is to its credit - it has a sane exit strategy!)


Missed that, thanks. That could indeed work. It could even nicely dovetail into the Typescript (or Reason, or Elm) compilation process.


This may be a bit of a reach, but there's AssemblyScript [1], which is a compiler from a strict variant of TypeScript to WebAssembly. Since technically V8 is also a WebAssembly engine, porting some part of your codebase from JS/TS to AssemblyScript could improve your performance.

[1]: https://www.assemblyscript.org/


Does V8 actually run compiler optimizations on WebAssembly? I thought it came pre-optimized and it just executed it.

My impression of WASM was it's ok but it's not very expressive for an assembly language - at least it has integers though. gcc.godbolt.org doesn't give you any library functions for it, even memcpy(), so I couldn't do a lot of testing.


I think it does. Here's an article introducing Liftoff [1] with the relevant parts:

> With Liftoff and TurboFan, V8 now has two compilation tiers for WebAssembly: Liftoff as the baseline compiler for fast startup and TurboFan as optimizing compiler for maximum performance.

> Immediately after Liftoff compilation of a module finished, the WebAssembly engine starts background threads to generate optimized code for the module.

[1]: https://v8.dev/blog/liftoff


> already uncommon

It's definitely not uncommon though. Just to provide one example: how often do you use all the arguments to the `.map` callback?

Usually you'd write something like `arr.map(x => x + 1)`.

However, to spell out all the arguments you'd have to write `arr.map((x, i, arr) => x + 1)`.

And there's tons of other functions like that in stdlib or 3rd-party libs that people use without passing or accepting all the args. That's why this optimisation is so useful.


This is exactly the kind of thing that TypeScript generated JS could help optimize. Because tsc can see that this `map` call is, in fact, Array.prototype.map, it can deduce that the lambda will get sent three values. It could generate dummy arguments for that in the JS output. It could just as easily generate a hint to the same effect in some constraint file as the GP suggests.
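
E.g. (hypothetical output; tsc does nothing like this today):

    // Input:  arr.map(x => x + 1)
    // Output: all three callback parameters spelled out, so the engine
    // sees a callback whose arity matches what map actually passes
    arr.map((x, _i, _arr) => x + 1);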


In general I think this is an interesting idea, but I feel like this has a lot of overlap with asm.js and WebAssembly.

With this standard we would have standard dynamic JavaScript as the world knows it today, a restricted subset ('constraint spec') that is still designed for human readability/writability, and then asm.js/WebAssembly, which would not be written directly but instead would be an output of code written in other languages. Programmers will want interoperability between all of these paths, and that is a lot of complexity for these engines to manage.


There's already "use strict" that does some of this to enable optimizations and other stuff. Maybe it should be even stricter or something similar could be used.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


Right but I'm talking about very specific hints for one codebase

"Function X will only ever take two arguments, they will be of types Y and Z"

"This object will only ever have these properties; it will never have properties created or deleted"

"This prototype will never be modified at runtime"


It still has to verify that those assertions are true in practice. Any random person at the console can call a function and pass whatever they like.

The only real way to make that work would be a new `<script type="typed-module" />` variant that implemented a subset of JS in a typed environment.

My guess is that this could happen in the future, but everyone is watching TypeScript as a testing ground to find exactly what works, what doesn't, and what works but isn't desirable.


There's already a mechanism for bailing out when optimization assumptions are violated; optimizations could be applied more eagerly while retaining the ability to degrade.


> "This object will only ever have these properties; it will never have properties created or deleted"

Already in JavaScript: Object.seal(object)

> "This prototype will never be modified at runtime"

Already in JavaScript: Object.freeze(myPrototype)


Regarding your update: I think there is some interest in passing TS type annotations on to the JS VM/JIT, but my impression is still that TC39 is rather shy on the subject after the still not entirely healed scars of ES4.

I know I would love to see TS type annotations made syntactically valid if semantically ignored (ala Python's type annotations PEPs), simply to make sure valid TS runs in the browser without a build step to better unify the "subset". It wouldn't be a far shot from there to seeing at least some small type annotation hints support in JS engines. Though I definitely understand the concerns and wariness that anything done there would be very engine-specific and in danger of lock in for the web.

I heard there were some V8-specific prototype attempts from some of the Deno developers, but I don't recall how far those attempts got or if they had anything in the way of results.


I think this is what WASM is meant to achieve. It allows for much more precise semantics without needing to parse an AST.


By putting the number of arguments onto the stack, we end up making the stack larger for every single function call. We also require every access to any stack variable to do math to figure out where on the stack its arguments are.

That seems an insane overhead for the common case... If this really does make benchmarks faster overall, it suggests an even better solution might exist out there that does not add overhead for the common case...

Perhaps it might be worth keeping track of how many arguments a function is typically called with, and recompiling code for each case? Then all the checks for argument counts can be entirely removed in the optimized code.


One of the things I've realized after working on SpiderMonkey for a few years is that simple solutions that accept a little bit of overhead are often faster than more complicated approaches that try to eke out every last cycle.

Math is fast. Stack space is rarely a key constraint. On the other hand, tracking typical argument counts for a function also requires overhead, and compiling multiple copies of a function is also not free. If a particular call site is hot enough that call overhead matters, it's probably hot enough that inlining is the correct answer.

As one additional data point: V8's new approach appears to be very similar to how SpiderMonkey's been implemented all along. Convergent evolution is generally a good sign.


YouTube fed me a video I’d watched a couple years ago about a log searching tool. Compressed data, decompressed into cpu cache, and then scanned. Almost no indexing at all, just cpu cache for speed.

We spend a lot of time arguing with the cpu. If we give it something it’s actually happy doing, it almost doesn’t matter how stupid that thing is, because it’s stupid fast.


Interesting. I'd really appreciate it if you could share the video with us. Thanks.


This is one of the main selling points of Blosc https://blosc.org/pages/blosc-in-depth/


I couldn't find the talk hinted at, but this blog post seems to touch on some of the same ideas: https://www.humio.com/whats-new/blog/how-fast-can-you-grep/


Brute force search is how Scalyr does it. Here's an article they wrote about their implementation.

https://www.scalyr.com/blog/searching-1tb-sec-systems-engine...


> Compressed data, decompressed into cpu cache, and then scanned. Almost no indexing at all, just cpu cache for speed.

Where are these programs not using the CPU cache?


Indexes and tree structures involve pointer walking. Nothing is even guaranteed to be in main memory, nevermind L2 cache. These guys apparently went straight from disk (linear reads) to L1/L2.


I assume they meant decompressing blocks small enough to fit in the CPU cache, and doing in-memory scans.


> We also require every access to any stack variable to do math to figure out where on the stack its arguments are.

In the article, it appears to be the opposite. Previously this math was required (`[ai] = 2 + parameter_count - i - 1`), but by reversing the arguments on the stack it's now always a constant offset, and they prevent indexing out of the passed arguments in the frame by ensuring there are at least as many arguments as formal parameters, stuffing the call with extra `undefined`s:

> But what happens if we reverse the arguments? Now the offset can be simply calculated as [ai] = 2 + i. We don’t need to know how many arguments are in the stack, but if we can guarantee that we'll always have at least the parameter count of arguments in the stack, then we can always use this scheme to calculate the offset.
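
Plugging in numbers (say parameter_count = 3) makes the difference concrete:

    // Old layout (arguments pushed in source order):
    //   offset of a0 = 2 + 3 - 0 - 1 = 4   // depends on parameter_count
    //   offset of a2 = 2 + 3 - 2 - 1 = 2
    // Reversed layout:
    //   offset of a0 = 2 + 0 = 2           // constant per index
    //   offset of a2 = 2 + 2 = 4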


> We also require every access to any stack variable to do math to figure out where on the stack its arguments are.

I suspect that on modern architectures these trivial additions are effectively free most of the time. In my experience the hard part of optimizing for modern architectures is compensating for memory latency (cache optimization, prefetching) and pipeline flushes (branch prediction).

An inline, unconditional add is as cheap as it gets these days. A few additional bytes of stack that's (almost) always going to be super hot in cache is not really significant.

I suppose it could pose an issue for super deep function call tree where it would cause the stack to grow more than necessary, but given the usual memory overhead of JS I doubt that it's going to be very significant even then.


Is it better, or worse, than dumping the adapter object into the stack? Clearly, their numbers show it’s better. Also, runtime variable calling conventions are a nightmare. Telling the callee the calling convention (in some sort of mixed mode) still requires a flag ... like an integer? So, it seems they have to pay the cost of the integer no matter what.

Maybe I’m wrong — could you elaborate how you’d do it?


> Clearly, their numbers show it’s better.

Their numbers seem cherry-picked, or is TurboFan actually unable to inline code?


TF can inline, but if it runs that optimization then the benchmark would be turned into a constant, so it wouldn't be measuring the call overhead of the way arguments are passed.


Inlining is something I would think also eliminates stack frames. So a comparison between an optimization that gets rid of a complicated stack frame and a compiler that has inlining turned off isn't something I would consider ideal.


Stack space is virtually free and it’s just a few instructions to put the value on the stack. Regardless, this increased overhead is very small compared to the total overhead of making a function call in the first place. The place to eke out every last cycle is the tight loop, which should not include function calls anyway.


> Stack space is virtually free

On machines with 64 bit addressing. There are lots of things that don’t work so well on 32 bit or 16 bit OSes that we have to unlearn.

There have been a couple times where I thought about trying to catalog all of the things I think I know about a piece of hardware, software or even a single library, so that I can challenge my own assumptions when a major release comes out. Never have gotten around to it.


> On machines with 64 bit addressing. There are lots of things that don’t work so well on 32 bit or 16 bit OSes that we have to unlearn.

Stack space is virtually free on 32 bit systems as well. V8 is not designed for or portable to 16-bit or 8-bit systems, so while your point is interesting, it’s irrelevant here.


> Stack space is virtually free on 32 bit systems as well.

How do you figure that, given we’ve had quite a bit of time wrestling with 2G memory limitations and how those interact badly with multithreading?


> How do you figure that

Given 32 bit words you'll see 4 bytes per call for parameter counts. A call stack 100 calls deep will incur 400 extra bytes of stack space overhead; less than half a KiB. Along the way all extra 'arguments adaptor frames' are eliminated, so the net is even smaller.

That doesn't look like a problem. If you're 100 calls deep and you blow out the stack because it's 400 bytes larger you have other issues.


I'm not talking about word size on the stack. I'm talking about 2GB available for heap, plus reserved space for all stacks.

Most programming languages need the stack to be a linear set of addresses. So you figure out what the max reasonable stack size is for a given thread, and you pre-allocate that address space. Which means even if the stack stays shallow, you can't use those addresses for heap. If you increase the average case for your call stack size, then the available heap shrinks.

That's why I'm saying we are doing this now, even though in theory this knowledge has been available to us for a very long time. Nearly everything is 64 bit now, and even in the JVM where they carve 2 bits off for parallel mark and sweep, there are still oceans of extra address space for us to 'waste'.


> and you pre-allocate that address space

The increase in stack usage is highly unlikely to require more space than the default 'reasonable' stack size already being allocated for threads in 32b environments where one is likely to be operating v8; typically the defaults are already pretty generous. Unless the default thread stack size allocation actually has to be increased to accommodate the modest growth caused by the change then there is zero impact on available heap.


For general applications and as a matter of good practice, stack depth practically never gets deep enough to make a dent in 2GB.

If your stack is reaching a significant proportion of 2GB you are seriously doing something wrong and it’s likely other parts of your application will start breaking as well.


I think it's pretty clear that I'm saying "this has not been my experience."

It's not 'my stack'. It's all stacks. Each thread has a stack. They are all in your 1.2-2GB address space. If you're not using a Global Interpreter Lock'ed language, you could easily end up with hundreds of threads for production code. And the more you try to dedupe work, the more threads you are likely to end up with.


> “turns out that, with a clever trick, we can remove this extra frame, simplify the V8 codebase and get rid of almost the entire overhead.”

A lost opportunity to mention “with this one weird trick”


This sort of low level stuff is so interesting to me but I feel out of my depth having not looked at this sort of stuff since undergrad. In a hypothetical world where someone wanted to contribute to v8, what are some good resources to get up to speed?


If you mean JIT, then I'd recommend looking at PyPy, also other JS engines and Java HotSpot, to see what other techniques are out there, when they work, when don't and why.

Ultimately V8 is almost an ecosystem in itself: it's embedded into quite a bit of other software, the ECMA-262 standard is continuously changing, the Web is changing, Chrome changes on top of all this, plus there is a lot of backward-compatibility stuff.

You can look at the V8 bug tracker, pick a task, and try your luck.


There is a lot of content on V8's dev blog, with different depth, all pretty well written: https://v8.dev/blog


For sure the blog is great! But I’m thinking something more along the lines of “here’s how you’d build something like this” or “here’s the stuff to read to get started on a project like this”.


JIT engineers are mostly oddly specialized compiler engineers, so you're really looking at learning how optimizing compilers work as a prerequisite.


I’ve been coding JS for a decade. Nowadays I wish JS was simpler. I’m fine with ES6 syntactic sugar. I just want the language to be stricter, safer and faster. I don’t want to transpile TS into JS. Just make JS strict with monomorphic types.

One thing that really impressed me is how crazy fast golang is. Not just at runtime but compile time too. Esbuild is like 20x faster than webpack. That’s nuts. I see a webserver running on 512MB ram and single core, serving 10s of thousands of requests a second without batting an eye. Just rock solid reliability and performance.

I want to build web apps with low memory, cpu, and bandwidth profile. Give us a simpler language that doesn’t need crazy complicated hotspot profiling JITs. Just make the default, simple thing insanely fast.

Give us better fundamental primitives that don’t need a gajillion build tools.

The JS ecosystem is so complicated nowadays. It just seems we’re patching shit on top of each other without reasoning from first principles.

Disclaimer: I am not saying this specific improvement isn't worth it. I love V8, but I feel we can get orders-of-magnitude performance gains by removing things and simplifying, such that you don't even need complex top-level optimizations.


> I want to build web apps with low memory, cpu, and bandwidth profile. Give us a simpler language that doesn’t need crazy complicated hotspot profiling JITs. Just make the default, simple thing insanely fast.

What makes you think it isn't? What if I told you that people were creating Electron-style apps over a decade before Electron ever existed? Go use Firefox 0.8 or Netscape 6. Pretty much every part of the UI on your screen that you'll interact with is backed by JavaScript. Keep in mind that this was 15–20 years ago, pre-Windows Vista, where main memory wasn't measured in gigabytes. And that was all pre-JIT!

Here's a screenshot of the file listing for the version of AOL Instant Messenger that was built into Netscape Navigator: https://imgur.com/a/ztaMVon

And a shaky cellphone video of someone using it over 10 years later, before AOL shut down the network: https://www.youtube.com/watch?v=apyR0bPnFAo

> Esbuild is like 20x faster than webpack.

The problem is webpack and the development style favored by the NodeJS/NPM community. It's filled with bad practices, and they all do it because either no one knows better, or they don't care. JS is not inherently slow. It's exceptionally fast, even. Contrary to popular belief, though, how you write code still matters. The myth of the sufficiently smart compiler (including JITs) is that: a myth.


> Esbuild is like 20x faster than webpack.

Just serve your ESM modules directly from the filesystem. You don't need webpack or snowpack. Build in 0 seconds.
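
A minimal setup (file names are just for illustration):

    // app.js, loaded via <script type="module" src="./app.js">
    // Native ESM: the browser fetches ./greet.js itself; no bundler involved
    import { greet } from "./greet.js";
    greet(document.body);

    // greet.js
    export function greet(el) {
      el.textContent = "Hello from an unbundled module!";
    }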

You also don't need a transpiler like Babel if you drop support for IE11. Almost all ES6 features, such as classes, proxies, arrow functions, and async, are supported by modern browsers and several of their older versions. As long as you avoid features like static, private, and ??=, you get instant compile times.

It's not 2015. You don't need a transpiler to write modern JavaScript. As of writing, the latest version of Safari is 14 and the latest version of Chrome is 88, but these features fully work as far back as Safari 13 and Chrome 70; anything older has less than 1% of market share.

> The JS ecosystem is so complicated nowadays. It just seems we’re patching shit on top of each other without reasoning from first principles.

I agree. I don't use any third-party JavaScript dependencies, except for the occasional framework. You can quickly rewrite many third-party libraries yourself, more specialized to your use case. Third-party JS libraries tend to have poor performance.


> I just want the language to do be stricter, safer and faster. I don’t want to transpile TS into JS. Just make JS strict with monomorphic types.

You can accomplish a little bit of this with an IDE that supports JSDoc syntax. It will get type hints from your JSDoc definitions and show warnings or errors when you violate those definitions.
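
A minimal sketch of that setup:

    // @ts-check
    // (the pragma above asks the TypeScript checker to verify this plain-JS file)

    /**
     * @param {string} name
     * @param {number} times
     * @returns {string}
     */
    function repeat(name, times) {
      return name.repeat(times);
    }

    repeat("ha");      // flagged: expected 2 arguments, but got 1
    repeat("ha", "3"); // flagged: "3" is not assignable to number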


I think what you want is wasm. You just need to wait for a simple language which compiles to it - or learn Rust.


Very exciting!

This is super useful for Node.js - I've had PRs with features I didn't land because of this and I intend to investigate again e.g. https://github.com/nodejs/node/pull/35877


I find it amusing that they basically "rediscovered" the C printf() trick, AKA variadic function trick.


Being old... I read the article quickly, and had the same thought. Yes, C pushes arguments in reverse order... and we never cared if arguments were "under specified". Or over. Just needed to know where the first argument was, and how to get to the second, and how to return to the caller. The caller, of course, is then responsible for argument clean-up.

Made sense 50 years ago... makes sense now.

FredW


Lex Fridman just did a great podcast episode with Brendan Eich, credited with creating JavaScript. Lots of great little tidbits of history in the conversation - https://www.youtube.com/watch?v=krB0enBeSiE


Reminds me of cdecl vs stdcall with the argument ordering. Really interesting low level details here for sure.


I once proposed to also speed up and simplify async/await function calls by removing the extra caching layer and state machine underlying all such calls that come from using promises:

https://es.discourse.group/t/callback-based-simplified-async...


Have you considered that generators are basically state machines?


Yes, but I meant without an additional state machine and caching layer. Promise-based async/await also has the generators.


I read that TypeScript code is sometimes faster, because it tends to favor monomorphic functions.

Can I assume that V8 moved that gain to JS?


Even in TypeScript optional parameters are very common, so I don't really think it's a JS specific optimization.


If you want to maintain compatibility with existing JavaScript, you kinda have to support the complexity of variadic functions and multiple return types; otherwise FFI would be required to use JavaScript. It seems this is a convenience compromise TypeScript makes.


"JavaScript allows calling a function with a different number of arguments than the expected number of parameters, i.e., one can pass fewer or more arguments than the declared formal parameters."

I'm relatively new to Javascript. I've been bitten by both of these recently. I wasted an hour where I added a third argument to a function but missed a place elsewhere that was sending only two parameters. That makes it harder to change your code. Now I read that it's not only a good way to introduce ugly bugs, but that this wonderful feature also makes your code run slower. Genius.


May I recommend adding a static type checker to your tooling? You can annotate function types with TypeScript using JSDoc comments[1], or Flow[2] and transpile the annotations away with Babel. Then you would run the type checker as you would run your linter, as part of your CI.

1. https://www.typescriptlang.org/docs/handbook/intro-to-js-ts....

2. https://flow.org/


> May I recommend adding a static type checker to your tooling?

I fully understand that the JS community has a thing for dependencies, and views Amazon as its representative use case, but I was hoping to add a few small Javascript functions to a static html page. JS loses its appeal quickly when you start adding things like that - if I can't open an html file in a text editor and add the functions I need, I might as well use a different language.


I'm not really sure what Amazon has to do with this, but the problem you described - "I changed the API and forgot to update a part of the application" - is exactly what type checking is for.

This is not a "JS community has a thing for..." bit, this applies to any programming language.


I'm not sure what other language you could use to add client-side functionality to a static webpage?

But anyway, if you want to avoid a compilation step, the link GP mentioned shows how TypeScript can check JS files annotated with normal comments. You can run it through a compiler to remove the annotations if you want, but it's absolutely not required.


As someone who has used both Flow and Typescript for years, I'd recommend Typescript over Flow. You can use Babel to compile Typescript code just like you can with Flow and have Typescript do its type-checking in CI. Typescript has better editor integration, more type-definitions for libraries, and breaks compatibility much less often than Flow.


It has its issues, but the best use case is as a mechanism for optional parameters, especially when adding new parameters to existing functions while maintaining API compatibility, which is especially important for standard functions.

If you want to reduce the number of mistakes made, in TypeScript accidental under-application becomes a type error. I feel the experience of writing TypeScript is a lot smoother than JS alone.


When you add a new optional parameter to a function, there is no guarantee that you won't break code that calls the function.

Here is one notorious example:

  [ '1', '1', '1', '1' ].map( s => parseInt(s) )
This returns the expected result:

  [ 1, 1, 1, 1 ]
Then you think, "I don't need that little wrapper, I can just pass parseInt as the callback directly":

  [ '1', '1', '1', '1' ].map( parseInt )
And this returns something that you may not expect:

  [ 1, NaN, 1, 1 ]
That happens because parseInt() takes an optional second parameter with the number base, and map() passes a second parameter with the array index. Oops!

A similar situation can arise when you add a new optional parameter. You still have to check all the call sites, and for a public API you won't be able to do that.


This is true; I mainly didn’t bother mentioning it since it is still a fairly obscure edge case (relative to just getting started in JS anyway), and because TypeScript catches that too, most of the time.


Other languages have solved it using better methods.

1. Declare another function with the same name and make the older function a wrapper with a default for the new argument

2. Support for an explicit default argument

Either of these would have prevented GP from having their bug/problem


Presumably OP didn’t want to create a new function, or they would have. JavaScript supports default parameters, but again, presumably OP wanted the caller to supply all the arguments.

If you’re trying to change the signature of an existing function, I don’t think we’ve yet figured out a better safeguard than static typing.


It’s possible to use "arguments.length" to “change” the function signature and execute different code for different parameter counts. It’s a hack, but it works.
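
For example:

    // Poor man's overloading: branch on the actual argument count
    function area(w, h) {
      if (arguments.length === 1) return w * w; // square
      return w * h;                             // rectangle
    }
    area(3);    // 9
    area(3, 4); // 12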


The first solution can be done in JavaScript and it is done sometimes, but other times it may be undesirable, especially for refactoring. The second solution was done; ECMAScript (since the 2015 edition) supports explicit default arguments.
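
A minimal example of the second option, the ES2015 default parameter:

    // Existing call sites keep working when the new parameter is added
    function connect(host, timeout = 5000) {
      // ...
    }
    connect("example.com");      // timeout defaults to 5000
    connect("example.com", 100); // explicit override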

When you add TypeScript into the mix, it can help a lot as it is often able to detect under- and over-application as type errors (though not always, due to intentional looseness in the language; TypeScript is not fully sound.)


You'll love the output of ['1', '7', '11'].map(parseInt)


One would assume it to be equivalent to (written in the most old-style way possible so everybody can understand it)

  ['1', '7', '11'].map(function(n) {return parseInt(n);})
but no, the map callback has got an extra argument, the index into the array

  function traceParseInt(n, i) {
    console.log(i);
    return parseInt(n, i);
  }
  ['1', '7', '11'].map(traceParseInt)
  0
  1
  2
  [ 1, NaN, 3 ]

Nice way to wait for an integration test to complete. Thanks.


And parseInt() also takes an extra argument, the "base" of the number - hence the weird results.

(Which also means that a static type checker would not have caught this "bug".)


I've always found this example so frustrating. JavaScript isn't curried. Plenty of languages are not. `parseInt` is overloaded/variadic. What we know is that if one does not read the documentation for `parseInt`, they might get unexpected results... but whose fault is that? I don't see this as a "hidden gotcha". Yes, if you came from, say, Pascal, and you were used to `StrToInt` taking one arg, and you refused to check into it, you'd get odd results. But who among us, when learning a new language, doesn't look up things like "how to convert a string to an integer in <insert name of new language>"? That search will most likely land you at some docs that explain that second arg.


Number() only takes a single parameter. The correct version is:

['1', '7', '11'].map(Number)


Careful with that: https://jakearchibald.com/2021/function-callback-risks/

Using `Number` is probably safe because it'd break much of the web if it were changed. Also note that it is not equivalent to parseInt, as Number and parseInt apply different parsing rules.
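
The parsing rules differ in ways that matter:

    Number("0x10");   // 16  (accepts full numeric-literal syntax)
    parseInt("0x10"); // 16  (recognizes the hex prefix)
    Number("10px");   // NaN (the whole string must be numeric)
    parseInt("10px"); // 10  (parses leading digits, ignores the rest)
    Number("");       // 0
    parseInt("");     // NaN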


Thanks for linking that post. I meant to include it in my comment but wasn't able to find it again.


If only the language did anything at all to help you realize the wrong version is wrong and that this is somehow different.


> I'm relatively new to Javascript. I've been bitten by both of these recently. I wasted an hour where I added a third argument to a function but missed a place elsewhere that was sending only two parameters. That makes it harder to change your code.

How? Sure, if you make a breaking change it requires you to hunt down all callers, but JavaScript’s dynamic nature does that anyway.

It makes a lot of things easier, too, though it's not without gotchas. The most frequently encountered one, IME: functions you normally call with only a subset of their arguments interacting in surprising ways with functions that take callbacks and pass arguments you normally ignore (the sibling comment on the map/parseInt combo is a good example).


I was a late convert to "full stack" myself. Just go directly to Typescript and run a linter on the code before you execute it.

It seems like more than a programmer should have to do, but eventually you'll forget about the extra tools and have decent pseudo-compile-time error detection.


And if your IDE supports it, you can get informed of the errors before a compile (what Microsoft calls “Intellisense”)


That feature is one of the reasons for JavaScript's flexibility. Many hate JS in the beginning, but then they start to appreciate the advantages of these features and come to like the language.


We (ab)used the shit out of that kind of thing in SproutCore way back in 2008.

In situations like single-page app building, it's very useful.


It is amazing the amount of brain power put into making this language fast-ish. Too bad that brain power wasn't used to do other things.


I wonder if Openjdk or other VMs could benefit from a similar optimization.


Java knows at compile time exactly how many arguments are passed to a method, so presumably there's no adapter layer.


Wrong: Java varargs functions can take spread arrays as varargs parameters, hence the argument count for this not-uncommon case (e.g. JDBC APIs, format(), etc.) is only known at runtime.


But in Java (and similar), you have to be explicit about varargs. In JavaScript, every function supports varargs. This will run just fine:

    function print(arg) {
        console.log(arg);
    }
    print("a", "b", "c");
It’ll only output "a", obviously. In fact, this will run just fine too:

    print();
You’ll get "undefined" in the console, but it’ll work.

JavaScript’s nature is that arguments are in the "arguments" pseudo-array, and the parameters are just fancy names for different values in "arguments". See:

    function count() {
        console.log(arguments.length);
    }
    count();
    count("a");
    count("a", "b");
In order, you’ll get: 0, 1, and 2. Despite the fact that "count" has no parameters in the definition.

In the first function ("print"), "arg" is just syntax sugar for "arguments[0]".

What I’m getting at is: in C, Java, etc., the compiler knows what is a varargs function and what isn’t. In JavaScript, the interpreter/compiler doesn’t and has to assume everything is.


> In the first function ("print"), "arg" is just syntax sugar for "arguments[0]".

So much so that they reflect one another: you can set `arguments[0]` and retrieve that value from `arg`, and the other way around:

    function foo(arg0, arg1) {
      arg0 = "foo";
      arguments[1] = "bar";
      console.log(Array.from(arguments), arg0, arg1);
    }
    foo(1, 2)
will log

    [ "foo", "bar" ] foo bar
Although in reality optimising engines treat `arguments` as a keyword of sorts: they will not reify the object if they don't have to. However, this meant they had to deoptimise a fair bit before spread arguments arrived, since forwarding `arguments` was how you'd call your "super".



Hence, “pseudo-array”


Great explanation and I think you are right for Java, Kotlin, etc. But what about dynamic languages on the JVM such as GraalJS, groovy, etc?


> Java Varargs functions can take spreaded arrays as Var parameters

That's syntactic sugar for an array, in the most literal way possible: when you define `func(Object... args)` the actual bytecode is that of `func(Object[] args)`, and you can pass in an array to stand in for the entire varargs.

So no, they're right. The varargs just count as one formal parameter.


Yep! One can even define the `main` method using either:

    public static void main(String[] args)
    public static void main(String... args)


Note that there's no such thing as spreaded arrays in Java. Instead, varargs are just syntactic sugar for arrays.


I'd imagine not. Assuming the JVM's semantics aren't too different from those of the Java language, you always know at compile time how many arguments a function takes[1], so this optimization wouldn't be relevant.

[1]: Java does have variadic functions, but we also know at compile time which functions have such a signature, so it probably just desugars to a normal array as the final argument.


Well, there's also invokedynamic, but call site selection is in user space, so one can implement this optimization for one's own dynamic language without violating the JVM's own static invocation rules.


The whole problem this solves doesn't exist in the JVM.


The JVM isn't limited to Java; for example, GraalJS is also an ECMAScript-compliant JS engine.


Graal uses a completely different implementation model.


Never heard of Nashorn, JRuby, Groovy then? They all use the same C2 JIT as Java.


None of them relate to Graal, they just pretend to be Java to the JVM.


> The whole problem this solves doesn't exist in the JVM.

> The JVM isn't limited to Java, for example: Nashorn, JRuby, Groovy

Hence the optimization would make sense for the JVM for those client languages: Nashorn, JRuby, Groovy, etc.

Quite trivial to understand, isn't it?


Except Graal isn't the JVM; quite trivial to understand by anyone who knows a little bit about compilers.


GraalVM is an optional component for the JDK that communicates with the others (the garbage collector, TLS, etc.) through the JVMCI, the JVM Compiler Interface. But that doesn't matter; it is pointless to talk about Graal. My other examples, such as Nashorn/JRuby, do not use GraalVM; they use the standard JDK C2 and would still benefit from the optimization in the V8 blog, because they implement dynamic languages with the same semantics. It is useful for the JDK to implement optimizations that have no use for Java, because the JDK is much more than that. I'm being downvoted for stating this truism and educating people.

Edit: off topic, but I randomly stumbled upon an interesting comment about CoreCLR and it happened to be yours (2019). Do you have news regarding the development/improvements of RyuJIT?


Sorry, wrong again; you are the one that needs to update your information.

The Graal JIT compiler as used in OpenJDK is a subset of GraalVM, and it has been such a burden to keep in sync with proper GraalVM that it was decided to drop it and invest in improving C2 instead.

https://bugs.openjdk.java.net/issues/?jql=labels+%3D+graal

Second, Nashorn's tricks with invokedynamic and pretending to be Java never worked to the extent achieved by V8, hence its development was dropped and everyone got told to migrate to GraalVM JavaScript.

https://www.graalvm.org/reference-manual/js/NashornMigration...

Third, JRuby always struggled to achieve acceptable performance on the JVM by pretending to be Java, even with invokedynamic.

"JRuby: The Hard Parts with Charles Nutter"

https://www.youtube.com/watch?v=RvUouqLxgrY

And Graal uses a completely different execution model for Ruby, TruffleRuby: https://www.graalvm.org/reference-manual/ruby/

As for the CLR, yes, there has been plenty of progress: .NET Native (which uses the VC++ C2 compiler), CoreRT, and now the upcoming .NET 6 AOT compiler that is supposed to replace them.

There are a series of blog posts related to .NET 5 improvements, including C++ runtime code being replaced by modern C# features, and future roadmap.



