Hacker News
Node.js 15.0 (nodejs.medium.com)
217 points by wildpeaks on Oct 23, 2020 | 132 comments

Glad to see "throw on unhandled rejections" make it into Node, finally! I can stop carting around this little bit of code I used in every Node API I wrote:

  process.on('unhandledRejection', (err) => {
    throw err;
  });
Had become as second-nature for me as "set -euo pipefail".

idk if you've ever run into it, but `process.exit` has been a bit of a footgun in our node apps due to broken pipes and (sometimes) async io with `console`

we instead use

    process.on('unhandledRejection', (reason) => {
        throw reason;
    });
which, as long as you haven't handled `uncaughtException`, prints the stack trace and aborts.
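For anyone who hasn't used this hook: here's a minimal runnable illustration of it. It records the rejection reason instead of rethrowing so the snippet exits cleanly; the pattern above would `throw reason` to abort the process.

```javascript
// Minimal illustration of the 'unhandledRejection' hook. We record the
// reason here instead of rethrowing, so this snippet exits cleanly.
const captured = [];

process.on('unhandledRejection', (reason) => {
  captured.push(reason.message);
});

Promise.reject(new Error('boom')); // triggers the handler on a later tick
```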

Very happy to see this too!

In Emscripten we've had to emit a handler like that by default, so that test suites don't silently ignore errors. Being able to not emit it will avoid some code size and annoyances users have had.

Here's a changelog instead of this weird blog post:


What do you find weird about it?

It's on medium for one.

maybe more people would look at it if it were on tiktok

Even though I chuckled at this comment, I feel the quality of the comments is becoming more Reddit-like, which saddens me. It used to be that every comment went into detail on the point being made, instead of being a sarcastic one-liner ("sneer" in Dutch).

I think the original comment illustrated the point well enough. A change log should be optimized for practicality not popularity, otherwise we end up in a hellish world where change logs get published as Instagram posts or tiktok videos.

It was completely empty aside from the headings.

It turns out that a content blocker in Safari was causing this. I have no idea why, perhaps because of some class name or ID in the HTML on Medium?

With the addition of workspaces and yarn.lock support to npm 7, are there still reasons to use yarn over npm?

I was happy when yarn first came onto the scene and gave npm the kick in the butt it needed to improve.

Now I wish yarn could be deprecated and we could go back to a single package manager. There's unfortunately segmentation in different areas around package managers, e.g. electron seems to prefer yarn. And for package maintainers there's extra overhead to document and test installation with both npm and yarn.

I hear you, but things are not really moving in that direction, because it's not that simple. The closer you look into what they do and how, the clearer it becomes that [npm7 vs yarn1 vs yarn2 vs pnpm] is the current set of legit choices, for various reasons.

Yarn v2 PnP is simply a lifesaver if you have a medium+ sized monorepo.

We have a monorepo with 180 packages here. Without pnp, it takes 1h+ just to npm install any new third-party package in any local package; it’s a joke. With pnp it takes 18s.

So yes, from my point of view NPM is completely inadequate for any serious JS codebase.

We have a pretty large monorepo codebase (460 packages and counting) that we're migrating from yarn v1 to yarn v2. I'll say it's definitely not a plug-n-play migration (pardon the pun).

Some issues we ran into:

- it can be difficult to reason about the layout of peer dependencies. Oftentimes libraries that rely on Symbols or referential equality break, and you need to mess with packageExtensions, add resolutions, or unplug. Debugging mostly consists of throwing stuff at a wall and seeing what sticks

- file watching in large enough projects breaks w/ file descriptor exhaustion errors, forcing you to find and unplug the offending libraries

- there's a number of known incompatible libraries (flow being one of the most prominent) and the core team's approach to these at this point follows the Pareto principle (20% effort for 80% results, e.g. special casing for typescript), meaning I don't believe there will ever be a 100% compatibility milestone w/ the ecosystem

- it's much more black-boxy in terms of day-to-day debugging (e.g. it's much harder to manually edit files in node_modules to trace some root cause)

- we ran into inscrutable errors deep in packages that interface w/ C++, and were basically only able to fix them by pinning to an earlier version of a library that did not depend on said package.

- migration cost is heavily proportional to codebase complexity. My understanding is that Facebook gave up on it completely for the foreseeable future and ours has similarly been a significant time investment (and we're not even targeting strict mode yet)

The pros:

- install times and project switching times are indeed fast, even in our codebase that contains multiple major versions of various packages

- yarn v2 has many interesting features, such as protocols (though it's debatable if you want to commit to lock-in to benefit from those)
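For anyone hitting the peer-dependency issues in the first bullet above: the packageExtensions escape hatch lives in .yarnrc.yml. A hypothetical example (the package name is made up):

```yaml
# .yarnrc.yml
# Declare a peer dependency that "some-lib" forgot to list,
# so Yarn v2's strict resolution can satisfy it.
packageExtensions:
  "some-lib@*":
    peerDependencies:
      react: "*"
```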

Regarding TypeScript, I think it's important to point out that we have a working PR in the TypeScript repository that we've been maintaining for about a year now. It's not so much special casing as being ahead of trunk. I still hope the TypeScript team will show interest eventually and we'll be able to streamline the development.

[1] https://github.com/microsoft/TypeScript/pull/35206

I meant special casing in the sense that this is a conscious effort specifically targeted at TypeScript support, as opposed to some generic design that would cater to a large class of projects.

Mind you, I understand that there are legitimate reasons to approach it this way now (e.g. technical limitations, differences in opinion wrt project governance, cost/benefit on long tail, etc). I'm mostly cautioning the unaware that one shouldn't necessarily expect that every package will work under yarn v2 (though an overwhelmingly large majority does work just fine).

From what I've seen, the "unplug" command is supposed to allow you the ability to temporarily unzip a package so that you can do the traditional "hand-edit a file in a dependency" debugging approach.

Yes, but when you're dealing w/ transitive dependencies, oftentimes you need to jump between many packages. And you then need to clean up after your debugging, since you typically don't want to leave things unplugged if they don't need to be (as that affects performance).

I'm not saying it's impossible to debug, just that you end up having to jump through more hoops.

Well, you gotta clean up files you've hand-edited in `node_modules`, too, if you've been adding a bunch of `console.log` statements :)

at least this way it's just deleting the temp package folder or running whatever the "replug" command is, instead of having to go figure out all the files you were editing.

Eh, node_modules hacking is certainly not great by any stretch of the imagination, but once you work with it long enough, there's a bunch of stuff that you just get efficient at. Spamming undos in open files is fairly easy. If the editing ends up being a real fix, then you upstream it and install again. There's also considerations about jump-to-definition and similar tools, etc.

You can't accidentally commit your debugging (unplug edits package.json and there's no replug command) and you don't end up with 3 unplugged folders for the same package (that's a whole can of worms on its own). There's also some yarn 2 specific pitfalls regarding __dirname in local packages, symlinking semantics, etc.

Anyways, getting way too into the weeds here, I better stop now lol :)

I tried yarn 2 on a greenfield project, but discovered:

- pnp is made possible in part by mysterious “patches” to certain dependencies that don’t work well with it. Mysterious as in they’re obfuscated, and there isn’t much detail besides commit history. This is blocking if you wanna try out, say, the TypeScript 4.1 beta and the patch isn’t ready yet. But more importantly um... I do not want my dependency manager mysteriously patching stuff with obfuscated code?????

- it applies these patches even if you disable pnp, so same objections to the entire yarn 2 approach (currently)

So I’m back on yarn 1 and apparently gonna need to look at npm 7 at this point.

I wrote the above before caffeine really kicked in, so I neglected to add: pnp is itself achieved in part by obfuscating your entire dependency tree. That takes a loooot of trust, on a matter where trust has already exceedingly deteriorated. In hindsight, I regret even considering it.

Can you clarify what you mean by "obfuscating the dependency tree"?

They probably mean the idea that once you're in PnP, you can "kind of, sort of" peer into zipfs deps, but not in the same way that was possible in bare node_modules.

That said, I think yarn 2's PnP + zero installs (https://yarnpkg.com/features/zero-installs) is lovely with CI. Instead of tacking on 40+ seconds to resolve deps every build, vendoring deps with PnP on is much cheaper than the node_modules equivalent.

A gigantic single file encoded version of `node_modules` is just modules I can’t go look in.

(Not a real edit): my last gig was with a lot of well prepared juniors, but they lacked the confidence to go look inside dependencies to find out what was happening. I tried to encourage setting breakpoints or logging or whatever felt comfortable in required packages. It was hard.

Turning that into a blob is even more discouraging.

As I mentioned elsewhere in the thread, Yarn v2 does have an "unplug" command that will extract a given dep into a folder for the time being. Does help with that use case.

And if you trust that they’re fundamentally the same thing, that’s a great escape hatch. I personally tried to use two new things together and discovered that one is transparent and one is opaque magic... and given the opportunity to do harm, I found the opacity of one alarming. I don’t trust yarn to manage dependencies in pnp, because what I saw in how they handled a special case was completely black box. Literally binary blob patches with no explanation of what it’s doing or why. Completely impossible to audit without reverse engineering or auditing the entire tool. Why would I trust “unplug” to do anything but misdirect?

What "binary blob patches" are you referring to?

FWIW, if I wanted to confirm whether an "unplugged" package had been modified, I'd just download the original tarball from NPM, extract it, and diff the two folders.

I mean the way that yarn 2 “installs” typescript is by patching it with some manually maintained base64 blob that (I assume) corresponds in some way to the base64 blob that pnp produces. Both are probably something you can reverse engineer... if that’s how you want to trust your package manager I guess? Idk I only learned that the patching was a thing because it failed when I tried to install an “unsupported” package. I was alarmed by trying to track down what was happening and saw the patch has no explanation. I was more alarmed when yarn2 tried to apply the patch even with pnp disabled.

Hmm. Okay, digging around in the Yarn repo, I see this "generate patch" setup code [0]. Looks like they're trying to cherry-pick some specific commits from the TS repo based on the TS version, and specifically apply them to the TS file.

The "base64" bit is referenced here [1].

I would assume this specifically relates to the fact that TS does not have native support for Yarn PnP as a filesystem approach. The Yarn team has been keeping an open PR against TS [2] and trying to convince the TS maintainers to merge it, but it hasn't happened yet.

A bit odd, and I can understand why you're concerned, but it also looks like there's a very understandable reason for this.

I would have assumed that this doesn't get applied if you install TypeScript via the Yarn v2 `node_modules` linker, but would have to try it out and actually see.

[0] https://github.com/yarnpkg/berry/blob/f384f0f40e87d636e4021b...

[1] https://github.com/yarnpkg/berry/blob/f384f0f40e87d636e4021b...

[2] https://github.com/microsoft/TypeScript/pull/35206

This blob is literally our open PR, applied to the various TS releases. You can rebuild it using `gen-typescript-patch.sh` (we actually have a GH Action that does this on CI, to prevent malicious uncontrolled changes), and the sources are auditable in my PR.

Note that it gets applied regardless of the linker since it would cause the cache to change when going from a linker to another, and we wanted to make the experience smoother, but that it's a noop for non-PnP environments.

It definitely applies the patch even with the node_modules linker. That was why I gave up and went back to yarn 1

Sorry if this makes it harder but I honestly recommend reading up on pnpm (https://pnpm.js.org/) before committing to npm7. Npm7 auto-installs peer dependencies(!) and pnpm has some remarkable advantages over npm or yarn.

Pnpm is indeed better than npm, but I found its symlinking approach less compatible than yarn v2's (Next.js, for example, didn't support pnpm until very recently), while also having less deterministic module resolution, creating version-compatibility problems that disappeared with yarn v2.

I had already been meaning to look at it but had written it off because I wanted what yarn 2 was selling... but definitely gonna give it another look.

Did you try out pnpm, by chance? I’ve read a few good things, but it doesn’t seem to get mentioned all that often. So I’m curious what people with larger projects think about it.


I use it in some rather large repos and am a very big fan. It's simply fantastic for every type of repo and project I've thrown it into.

I did try it, but it caused two problems compared to yarn v2: the dependency resolution algorithm seems less deterministic or strict, causing version incompatibilities that yarn v2 did solve, and the symlinks are poorly supported by many tools (Next.js until very recently, React Native, etc.). Installs also take longer with pnpm. However, it has less runtime performance impact.

Cannot figure out why you are being downvoted. Yarn v1 and npm are horrible if you have a large dependency tree. Yarn v2 was the first time I enjoyed using a package manager.

Likely because he indirectly said that projects using fewer than 180 dependencies aren't serious.

could it be that in some languages (like my language, for example) serious is kind of a synonym for large?

I've been downvoted at times for using it to mean exactly that, but I can't help it after more than 40 years of thinking in a language different from English.

By serious I meant large indeed, I’m not a native speaker. I have some serious projects that have less than 180 packages too aha.

But if you start an ambitious company with a JS codebase today, start with yarn v2, you’ll save yourself some pain in the future.

I'd push back against "serious JS codebase", there.

Maybe it's just me but a monorepo with 180 packages sounds like a hole you've dug yourself into and you're propping yourself up with yarn.

I certainly don't think that anyone who keeps their packages separate (you could do that even within a monorepo, surely) has a "non-serious" codebase.

Well yarn exists for this purpose so I guess I’m not the only one doing this. And if the alternative involves having to manage independent versioning of 180 packages and their inter-dependencies then no thanks.

I’m not saying the situation is completely perfect (yarn v2 had its rough edges in the beginning for example), but it’s not too bad either. This monorepo is the best organized codebase of this size and diversity I’ve ever seen.

Feel free to explain alternative methods to manage 180 packages with 7 developers while sleeping at night.

I don't know why you are downvoted, but I agree here. PnP is so good: it takes less time and less storage. Yarn v2 is far superior to yarn v1/npm.

Yarn 2 is very nice. The portal: protocol is great and it works well with Nix package manager.

Some would say the same about NodeJS especially since Deno exists.

This npm cache has been a huge time-saver for me; you can run it locally or across your whole network for a shared cache -



From https://blog.npmjs.org/post/621733939456933888/npm-v7-series...

> The package-lock.json file is not only useful for ensuring deterministically reproducible builds. We also lean on it to track and store package metadata, saving considerably on package.json reads and requests to the registry. Since the yarn.lock file is so limited, it doesn’t have the metadata that we need to load on a regular basis.

So I guess there are some performance benefits with npm 7 compared to Yarn 1?

And interestingly, Yarn 2 actually diverges quite a bit from what a lot of Node users originally wanted from it (at least we have no interest in moving to it).

If you just use "yarn" as you'd think of it, you are probably still using Yarn 1, so I guess it's being thought of as a different parallel project

Are there reasons to go back to npm? I switched back when yarn came out and haven't looked back. Been super happy with yarn. Can't say the same about npm.

People are more likely to already have npm installed and to be familiar with it. So there's an argument to be made that all else being equal, picking npm lowers the barrier to entry for new contributors. This consideration could be especially important for open source projects.

That's a valid point, but I don't think the barrier is particularly high. I've done the switch from npm to yarn once. It was a process measured in hours to understand the differences. It's not like Git vs Subversion or something like that.

> Are there reasons to go back to npm?

Ships with Node.

I don't think that's very compelling, versioning-wise (it's still independently versioned). Furthermore, the official node docker images come with yarn pre-installed, and there appears to be no way to pin a specific npm version in source control, like you can with `yarn policies set-version` (v1). That has worked wonders for us. Before yarn we used to have problems with developers using different versions of npm on their machines/build agents, and .nvmrc/"engines" doesn't help you there other than being an "error gate". The yarn executable acting like a shim delegating to the checked-in version is brilliant for versioning (especially CI).

Npm is very buggy for local dependencies - e.g. in a monorepo that contains node modules. That might be fixed in npm 7, but I doubt it.

OTOH, yarn handles this just fine.

I've been trying the npm workspaces support in the last few beta and release candidate releases and it just wasn't stable enough for me.


No. Use pnpm (and Volta js) instead.

I've been considering switching to pnpm for political reasons since using open source projects that are ultimately at the mercy of big corps (npm > Microsoft, yarn > Facebook) makes me slightly uneasy. But I'm hesitant to because pnpm seems so new.

Have you encountered any regularly occurring issues or headaches regarding pnpm?

FWIW yarn v2 isn't affiliated with Facebook. The lead maintainer is a Datadog employee, but the project isn't at the mercy of any company.

Thank you. I was not aware of this. Also, last I heard transitioning from yarn v1 to v2 was not straightforward. Do you know if this is still the case?

FWIW, I recently tried a branch where I migrated our existing repo from Yarn v1 to v2.

The immediate issues I ran into were lack of Yarn v2 support for some features critical for internal enterprise usage: no support for the `strictSsl` / `caFile` config options from NPM / Yarn v1, and an inability to read lockfile URLs that were pointing to an internal NexusRepository instance for proxying NPM package installation.

Both issues were resolved very quickly by the Yarn team. I then ran into a problem where the post-install build steps could not run in a locked-down corporate security environment, and that issue was also addressed very quickly, with the Yarn team putting up a PR that tried different process launching approaches and iterating until one worked for me.

Having sorted out those issues, I was able to move on to actually following the steps in the Yarn v2 migration guide [0]. The steps worked basically as advertised. The `@yarnpkg/doctor` tool identified several places where we were relying on imports that hadn't been strictly declared, so I fixed those. Starting up the app caused some thrown errors as other non-declared imports were hit, so I kept iterating on fixing those.

I also used the `@yarnpkg/pnpify --vscode` option to generate some kind of settings file for VS Code, and added the suggested "zip file system" extension to VS Code. That allowed me to right-click a library TS type, "Go to Definition", and show a file that was still packed in a package tarball.

I had to switch off to other tasks and haven't had time to go back and finish trying out the migration. But, parts of our codebase were running correctly, and it looked like I just needed to finish out the process of checking for any remaining non-declared dependencies.

Can't vouch for how this would work out in production or a larger build setup, but things looked promising overall.

[0] https://yarnpkg.com/advanced/migration

I've had a few issues using pnpm with other tools (Renovate, Dependabot, etc.), but at least with Renovate the issues have been / are being worked out. I'm happy with pnpm so far and will continue to adopt it incrementally as its popularity grows.

I've been using pnpm for a few months on a 'serious' codebase, if I may use the same wording as my predecessors. No issues so far.

Can Node eventually fix the problems that coaxed Ryan Dahl to move on to Deno?

In his 2018 talk, "10 Things I Regret About Node.js" https://www.youtube.com/watch?v=M3BM9TB-8yA&vl=en he identifies seven (not ten) regrets.

1. Not sticking with Promises: This is changing, slowly. You can `import {readFile} from 'fs/promises'` in Node and it works as you'd expect, including top-level await. (Backwards compatibility means the callback API can never go away.)

2. Security (your linter shouldn't have complete access to your computer and network): Deno hasn't done a great job with this, either. You can restrict the access that a Deno process has, but you can't restrict the access for individual modules. If any module in your server needs to access something, then every module in your server can access it.

I predict that module-level authorizations will be solved some day by browser vendors, and that Node and Deno will adopt the thing. Deno will probably have to throw out their thing when that happens.

3. Build system (GYP). This has no effect on userland Node developers. You build node with make. Another build system could be adopted, but I think nobody's bothered. Deno has a protobuf FFI to communicate with V8. You can do that with Node if you want. Shrug.

4. require("package") relies on package.json. Deno uses import maps. Node will probably honor import maps someday, too.

5. node_modules: He said it "complicates the module resolution algorithm." Meh. He also points out that node_modules is too large, but that's a Node cultural problem. Deno's community is still small, but it will have that problem, too, except it will have a large shared "cache" instead of a large local node_modules folder.

6. require("module") without the extension ".js": Deno does this, too, using import maps. It's fine.

7. index.js: Again, it "complicated the module loading system." Meh?
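For reference on points 4 and 6: an import map is a small JSON document, handed to the runtime, that maps bare specifiers to URLs. A minimal hypothetical example (the CDN URL is made up):

```json
{
  "imports": {
    "lodash": "https://cdn.example.com/lodash/4.17.20/lodash.js"
  }
}
```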

> You can `import {readFile} from 'fs/promises'` in Node and it works as you'd expect, including top-level await

I can't believe I missed that. I've still been writing promise wrappers like a fool.

As a user land Node developer I’ve had tons of problems with GYP and packages I install trying to use it. It absolutely leaks into the developer experience.

Things should be getting better with the increasing adoption of N-API. Though it's still a long journey.

Wait, there is a fs/promises module? I still use Bluebird.promisify in every recent project :facepalm:

Node also ships "util.promisify" in its core https://nodejs.org/api/util.html#util_util_promisify_origina...

I'm not sure why this was downvoted, but I think the answer is basically: yes, but only by making a lot of breaking changes. Which sort of leads you to Deno anyway. My take is that we probably won't see a wholesale change to the way things like file/network access works, package manager centralization, etc. anytime soon.

I don't know why this is being downvoted. Ryan Dahl, the original developer of Node, moved on to build Deno.

Ryan definitely deserves credit for the initial development of node.js. But that phrasing makes it sound like he moved from node to deno - which isn't the case. He moved from node to other projects. Over the total lifetime of node, Ryan was _not_ involved for far longer than he _was_ involved (even when only counting the time before Deno was created).

The top-level comment seems to suggest that Ryan was an integral part of node development before deno and that "getting him back" was relevant. Which isn't what happened. There are people who worked on node and moved to deno. But Ryan isn't really one of them.

It’s being downvoted because the Node community seems to have a hard time accepting constructive criticism... which is ironically also at the core of much of what’s wrong with Node in general.

Can you elaborate on this? I've always felt that the Node community, and the wider JavaScript community alongside it, has always been open to and embracing of constructive criticism, whether or not it's regarding the language itself (TC39, codemods / babel transformers, TypeScript, and any language that targets (Node)JS's ecosystem), established frameworks and libraries (underscore -> lodash, moment.js -> luxon), package management (pnpm, yarn), or even governance (the io.js fork, which was later on merged back into Node) and module systems (es modules, commonjs, et cetera).

Ryan's original criticisms regarding Node were totally valid, but most of them weren't really easily addressable without significant breakage or a long migration strategy, which potentially could've caused a _lot_ of issues and unclarity for many years.

Constructive criticism is specific and helpful. OP is just kvetching.

Support for ||= and ??= is very exciting!

Gee really? I feel like this is just another set of symbols I will have to mentally parse and remember what they mean. Sometimes too much shorthand is a thing.

On the other hand, the syntax for _operation_ assignment operators is pretty well standardized, though.

Perform operation and save value.

I'd argue every operation should have already had its respective assignment operator, to be more consistent.

If one were to use GitHub stars as a metric, I'm particularly interested to see that deno is catching up and in some regards surpasses Node.js' metrics.

Life shows that usually one can't catch lightning in a bottle twice.

Those logical assignment operators look pretty neat. I wish they went into detail about how they work and how to use them, though.

Here's more information on the 3 new logical assignment operators:

Logical OR assignment (||=) [1]

Logical AND assignment (&&=) [2]

Logical nullish assignment (??=) [3]

An example for OR assignment:

  let a = '';
  a ||= 'hello';
  console.log(a); // prints 'hello'
  a ||= 'not this';
  console.log(a); // prints 'hello'
[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe... [2]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe... [3]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
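One subtlety worth calling out: ||= assigns on any falsy value, while ??= only assigns when the value is null or undefined. A short sketch:

```javascript
// ||= assigns whenever the left side is falsy ('', 0, null, undefined, ...);
// ??= assigns only when the left side is null or undefined.
let count = 0;
count ||= 10;       // 0 is falsy, so count is replaced
console.log(count); // prints 10

let total = 0;
total ??= 10;       // 0 is not nullish, so total is kept
console.log(total); // prints 0
```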

Finally. I missed that idiom from Ruby so much.

Could be used for obfuscation

has anyone aliased npm to run nvm first, and then run npm, passing in the arguments?

or have people made nvm run when their system or terminal window starts so that they don't have to remember to do it

You can run nvm as part of your prompt so you pick it up when you cd into a directory with a .nvmrc. In your bash_profile:

    UPDATE_NVM='[ -e .nvmrc ] && [ v$(<.nvmrc) != $(node --version) ] && nvm use'
    PROMPT_COMMAND="$UPDATE_NVM; ...other prompt commands..."

I use nix (nix-shell) for that; it's basically a universal package and env manager. You create a config file, put it in a directory, and wherever you are in that directory, running `nix-shell` will put you in the correct env, not just for JS. You could also script it basically however you like.

Does that work fine in cron and other ways of invoking it programmatically?

Many of these version managers simply can't be invoked outside interactive shells because they require an init script to be loaded first.

I did not need to work with this use case, but from what I understand about it, you configure cron in NixOS in a declarative way and Nix configures cron [0]. You will most likely see no issues with scripting it anyway; there's a whole language and it can interop with other languages

[0] https://nixos.wiki/wiki/Cron

I use asdf-vm, which automatically sets up your binaries when you enter a directory that needs them (provided they're described in a .tool-versions file). It's really neat, especially because you can check the .tool-versions file into version control.

I use "n", not "nvm". No futzing around with .bashrc, just put stuff in /usr/local as God intended.


We combine direnv with a .nvmrc

With fnm, you can use a .node-version file that auto-switches to the correct version when you change directories to the project.

> You can expect new releases of Node.js 15 approximately every two-weeks,

In this state, shouldn't it be Node 0.15?

No? Node.js 15 is "current" - as in it is essentially the "develop branch." If you take a look at the release cycle, 14 will start LTS in a few days, and 16 will start as LTS next year in April. While in LTS they only receive security updates as well as critical bug fixes.

I agree, but for the reason that going to 0.15 from 14 wouldn't make sense. I believe the master, v14.x, and v15.x branches are diverging anyways, each with their own development. They snapshot the master branch at some times into a new branch as the next master version, and do patches on that branch. So multiple branches are concurrently being developed. But the master branch, not v15.x, would be the canonical "develop branch".

Is it just me or has JS programming in pure Node become too complex?

And, frustratingly, the line where Node ends and frameworks begin is too blurry. This is made worse by the fact that thousands of bloggers have a node server tutorial, essentially drowning out the good ones.

No? Can you provide examples? On a fundamental level, Node is effectively exactly the same as it was in the v4 days etc.

I also don't see how it could be possible to find the line between Node itself and frameworks ambiguous - unless the framework itself is invoking your JS files, in which case I would recommend moving to something lighter (i.e. something more designed for composition - where you call it, rather than it calling you)

Maybe he means that frameworks like Express have become more modular over time, which might confuse people about where the line between Node and framework functionality is.

Express really exemplifies the "composition over heavy framework" concept though - you never really hand over any control to it

Agreed. I've really come to love Express and the direction it's gone after years of using Rails, which eventually left me feeling like a prisoner to its opinionated nature.

I don't use frameworks so there is no such blur for me. I also don't find Node any more complex now than it used to be even though there are more features and APIs. I just stick to the original callback style of writing code, stay true to Node's APIs, and use TypeScript.

The most challenging thing for me in Node right now is upgrading an application with my HTTP services to use HTTP/2 with binary streams.

why would one stick with callbacks? i understand why people might skip promises... but async/await????

I love async/await but I really wish it wasn't built on promises.

All you would have had to do is make it so when you use the await keyword, you don't pass the callback, the runtime passes it for you, pauses the function and resumes it when the callback is called (returning values and throwing errors as you would expect). I implemented this here: https://github.com/bessiambre/casync but it would be even better with language level support.

Whenever I encounter JS beginners and they ask me about simple looping over asynchronous operations sequentially, I mention async/await, the discussion veers towards promises and then I have to mention the state machine and the caching layer for errors and return values, pending, fulfilled, rejected, settled states, reject/resolve callbacks and how it all fits together with 'then' chaining and 'dynamic replacement' by which point they just want to go back to Python or whatever.

With a callback based async/await, I could just say: Here, with this keyword, the function will pause until the callback is called. Just put your function taking a callback in a normal loop and await it. It would be cleaner, more functional, more stateless and faster.

The features of promises aren't even used with async/await. The point of promises - the caching layer and the state machine - is to be able to add the continuation later, but with async/await the continuation is always just the next line in the function, so it never needs to be added later. It's an unnecessary performance penalty, plus complexity and statefulness, on every asynchronous function call.
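For the curious, the core of that idea can be approximated in userland today by driving a generator, with `yield` playing the role of the callback-based `await`. This is a rough hypothetical sketch of the concept (the function and variable names are mine), not the linked casync library's actual implementation:

```javascript
// Hypothetical sketch: a runner that drives a generator, passing a
// node-style callback into each yielded function so `yield` behaves
// like a callback-based `await`. No promise objects are allocated.
function callbackAwait(genFn) {
  return (...args) => {
    const done = args.pop(); // final node-style (err, value) callback
    const gen = genFn(...args);
    (function step(err, value) {
      let r;
      try {
        // Resume the generator, throwing into it on error.
        r = err ? gen.throw(err) : gen.next(value);
      } catch (e) {
        return done(e); // error escaped the generator body
      }
      if (r.done) return done(null, r.value);
      r.value(step); // each yielded value is a fn expecting a callback
    })();
  };
}

// Usage: sequential "awaits" with plain callbacks underneath.
const addTwo = callbackAwait(function* () {
  const a = yield (cb) => setImmediate(() => cb(null, 1));
  const b = yield (cb) => setImmediate(() => cb(null, 2));
  return a + b;
});
addTwo((err, sum) => console.log(sum)); // 3
```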

I did try to propose this as a language feature here: https://es.discourse.group/t/callback-based-simplified-async...

It didn't get much traction and I was too busy to push it further.

We use callback style at work, with heavy use of async.auto ("async" the node_module). One of the greatest functions npm can get you :)

I'm a Scala person, so I naturally tend to like the promise style (and async/await is nice sugar), but my experience from interviewing a lot is that people who weren't exposed much to callback style don't understand what asynchronous means, which is a good enough reason IMO to keep callbacks :) The fact that the std lib is also full of them (we are using Node 12; I don't know the current status, but for now most of the std lib is callback-only for us) does not make me want to switch at all.
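For anyone who hasn't used it, here's a rough self-contained sketch of what async.auto does (the real caolan/async module adds proper error aggregation, concurrency limits, cycle detection, and more):

```javascript
// Minimal sketch of the async.auto idea: run named tasks, where a task
// given as ['dep1', 'dep2', fn] waits for its dependencies' results.
function miniAuto(tasks, done) {
  const results = {};
  const notStarted = new Set(Object.keys(tasks));
  let remaining = notStarted.size;

  function tryRun() {
    for (const name of [...notStarted]) {
      if (!notStarted.has(name)) continue; // started by a nested call
      const spec = tasks[name];
      const deps = Array.isArray(spec) ? spec.slice(0, -1) : [];
      const fn = Array.isArray(spec) ? spec[spec.length - 1] : spec;
      if (!deps.every((d) => d in results)) continue; // deps not ready
      notStarted.delete(name);
      const cb = (err, value) => {
        if (err) return done(err);
        results[name] = value;
        if (--remaining === 0) return done(null, results);
        tryRun(); // some tasks may now be unblocked
      };
      deps.length ? fn(results, cb) : fn(cb);
    }
  }
  tryRun();
}

// Usage: `record` runs only after `config` and `user` complete.
miniAuto({
  config: (cb) => cb(null, 'us-east-1'),
  user: (cb) => cb(null, 42),
  record: ['config', 'user', (r, cb) => cb(null, `${r.user}@${r.config}`)],
}, (err, results) => console.log(results.record)); // 42@us-east-1
```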

Hahaha, that's a smart way of filtering out bad candidates.

Maybe during the interview process we should make sure the candidates fully understand the callback style and Javascript's async model.

We also have a big project. We started during the callback days and skipped promises (I never thought promises alone were an improvement over callbacks and async.auto). But as soon as async/await was in, we allowed it in the codebase.

So now the codebase is a mess, with some functions being callback style and others promises, stitched together with promisified functions. I'm hoping that within a 2-3 year time frame all our callback-style functions will eventually be phased out during small refactors and rewrites.

This migration plan is the only method I've found that enables projects to move from A -> B without spending massive amounts of time rewriting the whole project, while still keeping up to date with technologies so they don't need a rewrite every 10 years.

> Maybe during the interview process we should make sure the candidates fully understand the callback style and Javascript's async model.

I used to ask candidates to implement async.map and I used to be baffled that the majority of JS devs are unable to do it. There are those who would say "I'm not used to callbacks, I use promises". I would then allow them to use promises (no Promise.all allowed, that's what we are trying to implement ^^). I have never seen one of those "promise only candidate" manage to do the exercise.

If you want to push it to the next level, ask them to implement async.mapLimit. You wouldn't believe how few long-time professional JS devs are actually able to do it properly (I actually used to ask that one, before realizing that it was too much to ask...)
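For anyone curious, here's one possible callback-style answer to the mapLimit exercise (my own sketch, not the async module's actual implementation): keep at most `limit` calls in flight, and write each result back at its input index so order is preserved.

```javascript
// Run `fn(item, cb)` over `items` with at most `limit` concurrent
// calls, collecting results in input order.
function mapLimit(items, limit, fn, done) {
  const results = new Array(items.length);
  let next = 0;      // index of the next item to start
  let inFlight = 0;  // calls currently running
  let finished = false;

  function launch() {
    while (inFlight < limit && next < items.length) {
      const i = next++;
      inFlight++;
      fn(items[i], (err, value) => {
        if (finished) return; // an earlier call already errored
        if (err) { finished = true; return done(err); }
        results[i] = value;
        inFlight--;
        if (next === items.length && inFlight === 0) {
          finished = true;
          return done(null, results);
        }
        launch(); // a slot freed up; start the next item
      });
    }
  }

  if (items.length === 0) return done(null, results);
  launch();
}

// Usage: at most 2 timers running at once.
const double = (n, cb) => setTimeout(() => cb(null, n * 2), 10);
mapLimit([1, 2, 3, 4], 2, double, (err, out) => {
  console.log(out); // [2, 4, 6, 8]
});
```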

These are actually pretty awesome questions to ask in an interview. Thanks!

Logically it's all the same at execution time, but with callbacks the control flow is clearer to read for me.

For another perspective, I skip async/await and prefer promises ;)

I like promises too and was one of their most vocal supporters, but they are alien to a lot of people - at least that's what I've experienced so far :( async/await just feels right at home for everyone.

The trouble with async/await is to use it properly you really need to understand promises. For example, many people are stumped why you can't await inside of Array#map. I've always felt it's a leaky abstraction and I personally prefer to just use promises most of the time. I do find async/await a great fit for say puppeteer tests where the entire test is a long run of blocking async calls.

> many people are stumped why you can't await inside of Array#map

Isn't that solved with a simple google search? Async/await makes the function return promises, so if you use an async callback in Array#map you end up with an array of promises, which you can then use with Promise.all. Seems rather simple to explain and grok.

Now you are just confirming that you need to understand promises to use async/await.

I agree with that. What I'm saying is it's really not that hard of a subject to understand or teach. At least not enough to prevent me from using it entirely.
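To illustrate the point for anyone following along: an async callback makes Array#map produce promises, which then need Promise.all to become values.

```javascript
// An async function always returns a promise, so mapping with one
// yields an array of promises rather than values.
const square = async (n) => n * n; // stands in for a real async call

async function main() {
  const promises = [1, 2, 3].map((n) => square(n));
  console.log(promises.every((p) => p instanceof Promise)); // true

  // Promise.all resolves them into the actual values, in order.
  const values = await Promise.all(promises);
  console.log(values); // [1, 4, 9]
}
main();
```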

I don’t know much about compiler internals, but this sounds like it could complicate a lot of things and remove guarantees the compiler could otherwise assume (and since JS is compiled at runtime, that could slow down performance). I don’t know, though - maybe someone in the compiler space could answer.

I will always cherish the 2 years I spent doing Node.js work. I gleaned a new appreciation for stuff I've hated for a long time. npm helped me realize that Maven, for instance, is merely bad, not unforgivable.

I still can't believe the nodejs universe doesn't have back pressure (eg caolan/async) baked into the stack. Those were some pretty difficult conversations. My teammates had no idea what I was talking about or why my fixes worked.

Absolute paths for importing modules (requires) still makes me laugh.

But criticizing nodejs, js, etc is rather pointless. Like PHP, it's fractally wrong.

I don't get Node.js. It seems like you just need to somehow know the functions for a specific version of it, and stuff changes all the time. And there's not really good auto-suggestion.

You need to know the args, and it just seems "hacky". Like it's good for writing something small when you know what functions you use, etc., but just not stable for big stuff.

I can’t really agree with this, especially not since the advent of typescript. There’s not a dynamic language in existence with better tooling than JavaScript. You open up a js file in vscode or webstorm or whatever, and the typescript language server kicks in so you get type hints for all your code. If you switch to typescript it’s a whole other level of type safety.

Also, it seems like your comment could be generalized to include all dynamic language runtimes, not just Nodejs.

Yeah, I guess. I don't really know that much about it anyway. I like Java; the IDE support is so rich for it. Just makes writing so easy.

Node.js still implements CommonJS callback-style APIs (for web, file I/O, etc.) and module loading, specified in 2009 or so. "Stuff changing all the time" really isn't doing justice to Node.js. IMO, the Node.js API turned out to be very much on the stable side of things, yet also sports e.g. upcoming QUIC support.

That's not even remotely true. Node's core library stability is exemplary - it is near-impossible to make breaking changes to it, even across many major versions.

You can set up most IDEs to get excellent auto completion - VS Code does a good job of that kind of thing.

That's more of a JS issue than Node issue. The problem you are describing is one of 10 years ago, but not so much today, so long as you have a decent IDE. Intellij is the best IDE out there for autocompletion/intellisense of JS.

> And there's not really a good auto-suggestion. You need to know the args

IDEs do spoil people.

I can't write dynamic languages because of this. I would often not know what type a function expects or returns if it wasn't for compile-time suggestions (which in turn is what enables language servers).

Makes scripting languages really hard for me to use as a consequence.

I think this is one of the main reasons why TypeScript got so popular, the other reason being the excellent support for it in Visual Studio Code. Before adopting TypeScript, I'd have to read documentation in a wide variety of styles and standards, and then manually ensure that I was calling the right functions with the right arguments (or, alternatively, if I was lazy, I'd just write some shim code and attach a debugger to figure out the call signatures of callback functions). With TypeScript and type hints installed for the libraries I'm using, I instead just let my editor hand out typing information and autocomplete hints, and let the TypeScript compiler do type checking.

If anything, TypeScript sometimes feels like a nice middle-ground between C# and JavaScript (and Java?), and though it's not perfect, I do feel that it's pleasurable once you get the hang of it and the quirks of the ecosystem.
