A one-line package broke `npm create-react-app` (github.com)
599 points by tessela 39 days ago | 459 comments

Digging into the reason behind the breakage, the change is this one: https://github.com/then/is-promise/commit/feb90a40501c8ef69b...

Which adds support for ES modules: https://medium.com/@nodejs/announcing-core-node-js-support-f...

However the exports syntax requires a relative url, e.g. ‘./index.mjs’ not ‘index.mjs’. The fix is here: https://github.com/then/is-promise/pull/15/commits/3b3ea4150...
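Concretely, a sketch of the kind of change involved (field names from the Node docs; this is not the exact contents of is-promise's package.json): every target in the "exports" map has to start with "./", so a bare "index.mjs" makes Node reject the package entirely when anything tries to require it.

```json
{
  "name": "is-promise",
  "main": "index.js",
  "exports": {
    ".": {
      "import": "./index.mjs",
      "require": "./index.js"
    }
  }
}
```

With "import": "index.mjs" (no leading "./"), Node throws an invalid-package-target error on resolution, which is what took out everything depending on the package.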

From those commits, it seems this issue was fixed in 1h12min. That should be a new record, especially considering this is all volunteer work on a Saturday. While it's bad that things break, the speed at which this was fixed is truly amazing. A big thank you to everyone involved here.

Not sure where you're getting 1h12 from. The first issue was reported at 12:18pm (my time); the final update that fixed it was published at 3:08pm.

Not that long, but my issue with this release snafu is that:

- the build didn't pass CI in the first place

- the CI config wasn't updated to reflect the most recent LTS release of node

- the update happened directly to master (although that's up to how the maintainer wants to run their repo; in my experience it's much easier to revert a squashed PR than most other options)

- it took two patch versions to revert (where it may have only taken one if the author could have pressed "undo" in the PR)

This is a good example of how terribly messy JavaScript library creation is.

There is no change to the actual functionality of the library -- only to the way it is packaged, here to support something that is an "experimental" feature in node.

It is also something that is hard to write automated tests for.

> This is a good example of how terribly messy JavaScript library creation is.

Meanwhile over in .Net-land, after 15+ years of smooth sailing (5+ if you only count from the introduction of NuGet), the transition from full framework to .Net Core has made a multi-year long migraine out of packaging and managing dependencies.

I ran into multiple scenarios where even Microsoft-authored BCL packages were broken and needed updates to resolve only packaging issues. It's a lot better now than during v1.x days, but I still have hacks in my builds to work around some still broken referencing bits.

I wonder why people won't use yarn zero-installs. They are great for reproducible builds and can work offline. You can have a CI job and a git hook that check your code before deployment or before pushing to git.

Another way is to pin down specific versions, without ~ or ^, in the package.json so updates don't break stuff.
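For example, a hedged illustration (package and version chosen for this thread's context): "^2.1.0" lets npm/yarn pick up any newer 2.x release, while a bare "2.1.0" pins that exact version.

```json
{
  "dependencies": {
    "is-promise": "2.1.0"
  }
}
```

The trade-off is that you now have to bump versions by hand (or with a bot) to pick up fixes.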

What's "yarn zero installs"? Googling did not do it for me.

That might be referring to Yarn's "offline mirror" feature. When enabled, Yarn will cache package tarballs in the designated folder so that you can commit them to the repo. When someone else clones the repo and runs `yarn`, it will look in the offline mirror folder first, and assuming it finds packages matching the lockfile, use those.

This takes up _far_ less space than trying to commit your `node_modules` folder, and also works better cross-platform.
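If memory serves, enabling the offline mirror is a couple of lines in .yarnrc (Yarn 1.x; the folder name here is arbitrary):

```
# .yarnrc -- Yarn 1.x
yarn-offline-mirror "./npm-packages-offline-cache"
yarn-offline-mirror-pruning true
```

After the next `yarn install`, the package tarballs land in that folder and can be committed; the pruning setting removes tarballs that no longer appear in the lockfile.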

I wrote a blog post about setting up an offline mirror cache a couple years ago:


Used it on my last couple projects at work, and it worked out quite well for us.

That's quite interesting, although back in the day we did that for C dependencies that weren't packaged well, and it quickly ballooned the size of our repo, since git has to treat tarballs as binaries. Even if only a few lines of the dependency change in a patch version, you re-commit the entire 43 MB tarball (obviously that depends on the size of your tarball).

You could use Git LFS to store anything ending with a tarball extension. It's pretty well supported by most Git servers (I know GitHub and GitLab support it off the top of my head). You do need the LFS extension for Git to use it.

The other similar approach is to build in containers - and use Docker layers to contain the dependencies.

verdaccio aims to do this as a proxy: https://github.com/verdaccio/verdaccio

Instead of node_modules containing source code of the packages, yarn generates a pnp.js file which contains a map linking a package name and version to a location on the disk, and another map linking a package name and version to its set of dependencies.

All the installed packages are stored in zip form in the .yarn/cache folder to provide a reproducible build whenever you install a package from anywhere. You can commit them to version control. Unlike node_modules, they are much smaller in size due to compression. You will have offline, fully reproducible builds, which you can test using a CI before deployment or before pushing code to the repository.
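As a sketch of what that setup looks like in Yarn 2 ("Berry") -- the exact keys per the Yarn docs may differ by version:

```yaml
# .yarnrc.yml -- keep the package cache inside the repo instead of a
# global directory, so .yarn/cache and the pnp.js file can be committed
nodeLinker: pnp
enableGlobalCache: false
```

With that in place, a fresh clone needs no `yarn install` network access at all -- hence "zero-installs".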


This is a great feature I did not know about, thanks

I don't understand how it applies to the OP problem. Even without "zero installs", yarn all by itself with a yarn.lock already ensures the same versions as in the yarn.lock will be installed -- which will still be a reproducible build as long as a given version hasn't changed in the npm repo.

(It looks to me like "yarn zero" is primarily intended to let you install without a reliable network and/or faster and/or reduce the size of your deployment artifacts; but, true, it also gives you defense against a package version being removed or maliciously changed in the npm repo. But this wasn't something that happened in OP's case, was it? A particular version of a particular package being removed or changed in the repo?)

In this case, it was a new version that introduced the breakage, not a changed artifact for an existing version. AND the problem occurs on trying to create a new project template (if I understand right), so I think it's unlikely you'd already have a yarn.lock or a .yarn/cache.

Am I missing something? Don't think it's related to OP. But it's a cool feature!

FWIW, yarn.lock (and the lockfile for recent versions of NPM, IIRC) also keeps package hashes -- so a build is either fully reproducible and pulls down the same artifacts as the original, or it fails (if an artifact is missing or has changed).
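A yarn.lock entry looks roughly like this (the hashes are replaced with placeholders here, not real digests):

```
is-promise@^2.1.0:
  version "2.1.0"
  resolved "https://registry.yarnpkg.com/is-promise/-/is-promise-2.1.0.tgz#<sha1>"
  integrity sha512-<base64 digest>
```

If the registry ever serves a tarball whose digest doesn't match `integrity`, the install fails instead of silently using the changed artifact.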

`yarn zero` protects you against dependencies disappearing, and lets you install without network connectivity.

No. It wasn't meant for OP (is-promise) because that would require tests for the imports.

I saw some workarounds in the GitHub issue involving changing versions in the package.json and lockfiles. Instead of that, you could just roll back to the previous commit. Way easier. The package author also changed the earlier version after fixing it.

It would stop your shit from failing at least.

That's awesome. Thanks!!

Google "yarn plug and play", rather than "yarn zero installs". There isn't much in the way of details outside of the main Yarn website -- now focused on Yarn 2 -- which has the documentation (vs. Yarn 1.x, which does not have Plug'n'Play, works the same as NPM, and has now moved to classic.yarnpkg.com)

(Edit: I'm not quite sure how this would have completely prevented the issue? P'n'p is very good and seems to be a real step forward for JS package management but surely the same issue could have occurred regardless?)

- we’ve stopped using ^ and ~ because of the unpredictability of third party libraries and their authors’ potential for causing our own apps to break. We also find ourselves forking and managing our own versions of smaller/less popular libraries. In some cases, we’ve chosen to reimplement a library.

Isn't this all stuff that you add after generating the project? For example yarn.lock is created on your first install. Having a pre-generated yarn.lock is a no-go because of the dubious decision to include the full path to the registry the package was sourced from.

I’d argue that ‘index.mjs’ is a relative URL.

Digging requires depth. One-line modules aren't depth.

The problems that beset the Javascript ecosystem today are the same problems that beset the Unix ecosystem, back in the 90s when there still was one of those. TC39 plays the role now that OSF did then, standardizing good ideas and seeing them rolled out. That's why Promise is core now. But that process takes a long time and solutions from the "rough consensus and running code" period stick around, which is why instanceof Promise isn't enough of a test for things whose provenance you don't control.
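To make the instanceof point concrete: the check is-promise performs is (paraphrasing its one line from memory) duck typing on .then, because a promise created by another library, or in another realm such as an iframe or vm context, is a perfectly good promise without being `instanceof` your realm's `Promise`:

```javascript
// Duck-typed "thenable" check, in the spirit of is-promise:
// anything with a callable .then is treated as a promise.
function isPromise(obj) {
  return !!obj &&
    (typeof obj === 'object' || typeof obj === 'function') &&
    typeof obj.then === 'function';
}

const native = Promise.resolve(42);
const foreign = { then(resolve) { resolve(42); } }; // e.g. from another library or realm

console.log(native instanceof Promise);  // true
console.log(foreign instanceof Promise); // false -- instanceof misses it
console.log(isPromise(native));          // true
console.log(isPromise(foreign));         // true -- duck typing catches it
```

This is also why the Promises/A+ spec and `await` itself are defined in terms of thenables rather than any particular Promise class.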

Of course, such a situation can't last forever. If the idea is good enough, eventually someone will come along and, as Linux did to Unix, kill the parent and hollow out its corpse for a puppet, leaving the vestiges of the former ecosystem to carve out whatever insignificant niche they can. Now the major locus of incompatibility in the "Unix" world is in the differences between various distributions, and what of that isn't solved by distro packagers will be finally put to rest when systemd-packaged ships in 2024 amid a flurry of hot takes about the dangers of monoculture.

Bringing it back at last to the subject at hand, Deno appears to be trying to become the Linux of Javascript, through the innovative method of abandoning the concept of "package" entirely and just running code straight from wherever on the Internet it happens to live today. As a former-life devotee of Stack Overflow, I of course applaud this plan, and wish them all the luck they're certainly going to need.

The impetus behind "lol javascript trash amirite" channer takes today is exactly that behind the UNIX-Haters Handbook of yore. I have a printed copy of that, and it's still a fun occasional read. But those who enjoy "javascript trash lol" may do well to remember the Handbook authors' stated goal of burying worse-is-better Unix in favor of the even then senescent right-thing also-rans they favored, and to reflect on how well that played out for them.

And your example is why we have the "lol javascript trash amirite" chorus, because as you've noted these problems were solved decades ago. Yet for some reason, the JS and npm ecosystems always seem to have some dependency dustup once or twice a year.

Yes, that's largely my point. I'm not sure why it is surprising to see an ecosystem, twenty-five or so years younger than the one I compared it to, have the same problems as that one did twenty-five years or so ago.

In one of Robert "Uncle Bob" Martin's presentations you may find the answer. The number of developers duplicates each 5 years. That means that at any point in time half of the developers have less than 5 years of experience. Add to that the realization that inexperienced developers are learning from other inexperienced developers, and you get the answer to why we repeat the same mistakes again and again.

I guess it is a matter of time before that reality changes; we will not duplicate the number of developers indefinitely, and experience and good practices will accumulate.

Taking into account the circumstances, we are not doing so badly.

> The number of developers duplicates each 5 years

You probably mean "double" here, but the bottom line is that there is zero data to back up that claim.

He literally made up that number out of thin air to make his talk look more important.

Let's say it's 10 years, or make it 15 years, for the sake of the argument.

How does that change his original argument?

It should be fairly simple to look up people describing themselves as developers in the census data I think?

Does the census actually track that? I just did the questionnaire last night online and it didn't ask me anything about my occupation.

Or did you mean something other than the US Census (e.g. GitHub or Stack Overflow or LinkedIn profiles)?

The long form asks about your line of work. Most people get the short form.

No, I meant the US census (or whatever national census), I didn’t actually check if they asked that since it seemed like such a basic thing :/ sorry.

Not as easy as you might think, since “developers” isn't particular to software and software developers have lots of other near-equivalent terms, the set of which in use changes over time, and many of them aren't unique to software, either.

OTOH, historical BLS data is easy to look up.

Do you have the source? Sounds like an interesting talk.

I found it! :) It has a lot of content and insights.

"Uncle" Bob Martin - "The Future of Programming"


That's not the source, it's the claim.

There is zero evidence for his claim that the number of developers double every five years.

Off the top of my head, coding boot camps

Pardon me if I've misunderstood you. I feel that this line of reasoning that excuses modern Javascript's mistakes on the basis of it being a young language to be spurious. We don't need to engineer new languages that recreate the mistakes of previous ones, or even worse, commit entirely new sins of their own. It's not like no-one saw the problems of the Node/JS ecosystem, or the problems of untyped languages, coming from a distance. Still, Node.js was created anyway. I would argue that it, along with many of its kindred technologies, has actually contributed a net deficit to the web ecosystem.

Okay, then, argue it.

That line of reasoning suggests progress isn't being made and we are just reliving the past.

There are multiple reasons for this failure mode, only some of them subject to social learning.

Part of the problem is a learning process, and indeed, I think the Javascript world should have learned some lessons - a lot of the mess was predictable, and predicted. Maybe next time.

But part of the problem is that we pick winners through competition. If we had a functional magic 8-ball, we'd know which [ecosystem/language/distro/OS/anything else] to back and save all the time, money and effort wasted on marketplace sorting. But unless you prefer a command economy, this is how something wins. "We" "picked" Linux this way, and it took a while.

It's also not a surprise to see a similar process of stabilization play out at a higher layer of the stack, as it previously did at a lower one. Neither is it cause for regret; this is how lasting foundations get built, especially in so young a field of endeavor as ours. "History doesn't repeat itself, but it often rhymes."

25 years is roughly one generation. A new generation grows up, has no memory of the old problems?

Same with Covid: SARS was roughly 20 years ago, and people forgot there was SARS.

It ain't surprising, but rather just disappointing, that an ecosystem can't or won't learn from the trials and tribulations of other ecosystems.

EDIT: also, Node's more than a decade old at this point, so it is at least a little bit surprising that the ecosystem is still experiencing these sorts of issues.

Is it really though? Node is infamous for attracting large groups of people with notoriously misguided engineering practices whose egos far surpass their experience and knowledge.

I've been stuck using it for about 4 years and it makes me literally hate computers and programming. Everything is so outrageously bad and wrapped in smarmy self congratulating bullshit. It's just so staggeringly terrible...

So these kind of catastrophes every few months for bullshit reasons seem kind of obvious and expected, doesn't it?

NIH Syndrome is a double-edged sword that persists regardless of innovations.

This analogy doesn't hold up at all.

The UHH is a fun read, yes, but the biggest real-world problem with the Unix Wars was cross-compatibility. Your Sun code didn't run on Irix didn't run on BSD and god help you if a customer wanted Xenix. OK, you can draw some parallel here between React vs. Vue vs. Zeit vs. whatever.

But there was also the possibility, for non-software businesses, to pick a platform and stick to it. You run Sun, buy Sun machines, etc. That it was "Unix" didn't matter except to the software business selling you stuff, or what kind of timelines your in-house developers gave.

There is no equivalent in the JS world. If you pick React, you're not getting hurt because Vue and React are incompatible, you're getting hurt because the React shit breaks and churns. Every JavaScript community and subcommunity has the same problem, they keep punching themselves in the face, for reasons entirely unrelated to what their "competitors" are doing. Part of this is because the substrate itself is not good at all (way worse than Unix), part is community norms, and part is the piles of VC money that caused people to hop jobs and start greenfield projects every three months for 10 years rather than face any consequences of technical decisions.

Whatever eventually hollows out the mess of JS tech will be whatever figures out how to offer a stable developer experience across multiple years without ossifying. (And it can't also happen until the free money is gone, which maybe has finally come.)

"Pick React and stick to it" is the exact parallel to your "pick Sun and stick to it". Were you not there to see how often SunOS and Solaris updates broke things, too? But those updates were largely optional, and so are these. If you prefer React 15's class-based component model, you can pin the version and stick with it. You won't have access to new capabilities that rely on React 16 et cetera, but that's a tradeoff you can choose to make if it's worth your while to do so. You can go the other way if you want, too. The same holds true for other frameworks, if you use a framework at all. (You probably should, but if you can make a go of it starting from the Lions Book, then hey, have a blast.)

I agree that VC money is ultimately poison to the ecosystem and the industry, but that's a larger problem, and I could even argue that it's one which wouldn't affect JS at all if JS weren't fundamentally a good tool.

(To your edit: granted, and React, maybe and imo ideally plus Typescript, looks best situated to be on top when the whole thing shakes out, which I agree may be very soon. The framework-a-week style of a lot of JS devs does indeed seem hard to sustain outside an environment with ample free money floating around to waste, and React is both easy for an experienced dev to start with and supported by a strong ecosystem. Yes, led by Facebook, which I hate, but if we're going to end up with one de facto standard for the next ten years or so, TS/React looks less worse than all the other players at hand right now.)

> React is both easy for an experienced dev to start with and supported by a strong ecosystem.

I wouldn't say getting started with ReactJS is easy (or that it's properly supported). Each team that uses React within the same company uses a different philosophy (reflected in the design) and sometimes these flavors differ over time in the same team. We're back to singular "wizards" who dictate how software is to be built, while everyone else tinkers. It's a few steps from custom JS frameworks.

> The UHH is a fun read, yes, but the biggest real-world problem with the Unix Wars was cross-compatibility. Your Sun code didn't run on Irix didn't run on BSD and god help you if a customer wanted Xenix. OK, you can draw some parallel here between React vs. Vue vs. Zeit vs. whatever.

You made your point, proved yourself wrong, and then went ahead ignoring the fact that you proved yourself wrong.

>The UHH is a fun read, yes, but the biggest real-world problem with the Unix Wars was cross-compatibility. Your Sun code didn't run on Irix didn't run on BSD and god help you if a customer wanted Xenix. OK, you can draw some parallel here between React vs. Vue vs. Zeit vs. whatever

POSIX is a set of IEEE standards that have been around in one form or another since the 80s, maybe JavaScript could follow Unix's path there.

The existence of such a standard doesn't automatically guarantee compliance. There are plenty of APIs outside the scope of POSIX, plenty of places where POSIX has very underspecified behavior, and even then, the compliance test suite doesn't test all of the rules and you still get tons of incompatibilities.

POSIX was, for the most part, not a major success. The sheer dominance of Linux monoculture makes that easy to forget, though.

Of course it doesn't guarantee compliance, but like all standards it makes interop possible in a predictable way, e.g. some tcsh scripts run fine under bash, but that's not by design. The inability or unwillingness of concerned parties to adopt the standard is a separate problem. This is why "posixly" is an adverb with meaning here.

This is slightly off-tangent, but as someone who has written production software on the front-end (small part of what I do/have done) in:

Vanilla -> jQuery -> Angular.js -> Angular 2+, React pre-Redux existence -> modern React -> Vue (and hobby apps in Svelte + bunch of random stuff: Mithril, Hyperapp, etc)

I have something to say on the topic of:

> "If you pick React, you're not getting hurt because Vue and React are incompatible, you're getting hurt because the React shit breaks and churns."

I find the fact that front-end has a fragmented ecosystem due to different frameworks completely absurd. We have Webcomponents, which are framework-agnostic and will run in vanilla JS/HTML and nobody bothers to use them.

Most frameworks support compiling components to Webcomponents out-of-the-box (React excepted, big surprise).




If you are the author of a major UI component (or library of components), why would you purposefully choose to restrict your package to your framework's ecosystem? The amount of work it takes to publish a component that works in a static index.html page, with your UI component loaded through a <script> tag, is trivial for most frameworks.

I can't tell people how to live their lives, and not to be a choosy beggar, but if you build great tooling, don't you want as many people to be able to use it as possible?

Frameworks don't have to be a limiting factor, we have a spec for agnostic UI components that are interoperable, just nobody bothers to use them and it's infuriating.

You shouldn't have to hope that the person who built the best "Component for X" did it in your framework-of-choice (which will probably not be around in 2-3 years anyway, or will have changed so much it doesn't run anymore unless updated)


Footnote: The Ionic team built a framework for the singular purpose of making framework-agnostic UI elements that work with everything, and it's actually pretty cool. It's primarily used for design systems in larger organizations and cross-framework components. They list Apple, Microsoft, and Amazon as some of the people using it in production:


No one uses them because SSR is either non-existent or clunky with them.

Ignoring a common use case when inventing something is a good way to get your shit ignored in turn. Which is what happened.

Web components aren't really there yet. They will be two or three years from now. Some time between now and then, I expect React will gain the ability to compile down to them, which shouldn't be too hard since web components are pretty much what happens when the React model gets pulled into core.

You can compile React to Webcomponents with community tooling, the core framework just doesn't support them:


By "aren't really there yet", what do you mean? If you mean in a sense of public adoption and awareness, totally agree.

If you mean that they don't work properly, heartily disagree. They function just as well as custom components in any framework, without the problem of being vendor-locked.

You may not be able to dig in to the internals of the component as well as you would a custom build one in your framework-of-choice, but that's largely the same as using any pre-built UI component. You get access to whatever API the author decides to surface for interacting with it.

A properly built Webcomponent is generally indistinguishable from any other pre-built UI component in any other framework (Ionic built a multi-million dollar business off of this alone, and a purpose-built framework for it).

Very unlikely. Web components and React are trying to solve different problems, and the React team has repeatedly said this isn't going to happen.

> nobody bothers to use them

Here's the sad but unavoidable truth: the main purpose of Javascript currently is to keep Javascript developers employed.

Spoken like someone who's never seen what people perpetrate in, say, Java.

> Deno appears to be trying to become the Linux of Javascript

Deno always sounded to me more like "the Plan 9 of Javascript", to be honest. It seems to be better (yay for built-in TypeScript support! though I have my reservations about the permission management, but that's another discussion) but perhaps not better enough (at least just yet) to significantly gain traction.

The permissions management is a little tricky to think about at first, but once you get the hang of it I think it's actually quite nice. Setting strict permissions on CLI tools helps ensure that the CLI isn't doing anything nefarious when you're not looking (like sending telemetry data). Since this CLI has --allow-run, I can also have it execute a bin/server script that _does_ have network and read/write permissions, but only in the current app directory.

The problem I saw was how quickly you need to open up the permissions floodgates. I saw them live-demo a simple http server, and to do something as basic as that you need to open up full file system and network access. So if you’re doing anything like setting up a server (i.e. one of the core things one does when using a server-side scripting language), you’re back to square 1.

Ah never mind, I see they now have finer grained scopes. That should help.
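For instance (a hypothetical server.ts; flag syntax as I understand the current Deno CLI, which may still change before 1.0), the scopes can be narrowed to exactly what the server needs:

```
# Listen only on local port 8080 and read only ./public,
# rather than blanket --allow-net / --allow-read.
deno run --allow-net=0.0.0.0:8080 --allow-read=./public server.ts
```

Anything outside those grants -- another host, another directory -- still triggers a permission error at runtime.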

Deno was always Typescript-first fwiw

I have doubts about how this could possibly work. The idea is you pull a .ts file directly, right? Then your local ts-in-deno compiles that to extract typedefs for intellisense/etc. and the JS. What happens when it was created for a different version of TypeScript than what you’re running? Or if it was created targeting different flags than what you’re using? This will cause lots of problems:

I’m running my project with ts 3.6. Library upgraded to 3.7 and adds null chaining operators. Now my package is broken. In node land, you compile the TS down to a common target before distributing so you don’t have this problem.
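For anyone who hasn't hit it: the null chaining (optional chaining) operator is a TS 3.7 syntax addition, so a 3.6 compiler fails to even parse a library that ships it in raw .ts source. Its runtime semantics, shown here in plain modern JS (names invented for illustration):

```javascript
const config = { server: { port: 8080 } };

// ?. short-circuits to undefined instead of throwing a TypeError
// when an intermediate value is null/undefined.
const port = config.server?.port;   // 8080
const host = config.network?.host;  // undefined, no crash

console.log(port, host);
```

A 3.6 compiler chokes on the `?.` token itself, long before type checking, which is why distributing compiled-down JS plus .d.ts files sidesteps the issue.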

Similar, I’m using 3.8 and package upgrades to 3.9 and starts using some new builtin types that aren’t present in my TS. Now my package is broken. Previously you’d export a .d.ts targeting a specific version and again not have this problem.

Or, I want to upgrade to 3.9 but it adds some validations that cause my dependencies to not typecheck, now what?

Or, I’m using strictNullChecks. Dependent package isn’t. Trying to extract types now throws.

I’ve brought all of these (and many other concerns) up to the Deno folks on numerous occasions and never gotten an answer more concrete than “we’ll figure out what to do here eventually”. Now 1.0 is coming, and I’m not sure they’ve solved any of these problems.

> I’m running my project with ts 3.6. Library upgraded to 3.7 and adds null chaining operators. Now my package is broken.

Isn't this similar to not upgrading node and using an updated version of an npm package that calls a new function added to the standard library? All npm packages have a minimum node version, and similarly all deno code has a minimum deno version. Both use lockfiles to ensure your dependencies don't update unexpectedly.
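The npm side of that contract is the "engines" field in package.json -- a sketch, with the version range invented for illustration:

```json
{
  "engines": {
    "node": ">=10.0.0"
  }
}
```

npm only warns on a mismatched engines range by default, though; yarn treats it as an error unless you pass --ignore-engines.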

> Or, I’m using strictNullChecks. Dependent package isn’t.

This definitely sounds like a potential problem. Because Deno enables all strict checks by default, hopefully library authors will refrain from disabling them.

Node updates much less frequently than TS, so even if it was a problem before, it’s more of a problem now.

Rephrase: people use new TS features much more often than they use new Node features.

That might be true in general, but I seem to run into problems with the two with about equal frequency. One of the recent ones I ran into with node was stable array sort.

Yes, npm package maintainers spend a lot of time on node version compatibility. Here is a quote from prettier on their recent v2 release:

> The main focus should be dropping support for unsupported Node.js versions.


On the other hand, trying to set up a TypeScript monorepo with shared/dependent projects is a huge pain, since everything needs to be transpiled to intermediary JS, which severely limits or breaks tooling.

Even TS project references make assumptions about the contents of package.json (such as the entry file), or how the compiler service for VsCode preloads types from @types/ better than for your own referenced projects, which sadly ties TS to that particular ecosystem.

Language version compatibility is a good point, but perhaps TSC could respect the compiler version and flags of each package's tsconfig.json, and ensure compatibility for minor versions of the language?

Since I enjoy working in TS I'm willing to wait it out as well, the pros far outweigh the cons. Now that GitHub/MS acquired NPM, I have hopes that it will pave the way to make TS a first-class citizen, though I don't know if Deno will be part of the solution or not.

> TSC could respect the compiler version and flags of each package's tsconfig.json

That’s the problem - there is no tsconfig.json. You’re only importing a single URI.

I see. While I don't know the details, it seems it would promote the use of "entry/barrel" files once again.

> running code straight from wherever on the Internet it happens to live today.

This, exactly this. Young me thought this was a point of the whole thingy we call Internet.

And exactly that is what I like about QML from Qt. Just point to a file and that's it.

Go tried it; it went over like a lead balloon. Theory: lead balloons don't fly anywhere.

How is it a lead balloon? Go got super popular in the period before /vendor and dep (later modules). Yes, people wanted and got versions too, but the URL part stayed. ISTM they had a Pareto-optimal 20% piece of the puzzle solved, and bought themselves time to solve the other 80% years later.

Go still identifies packages by URL. The recent modules feature just added the equivalent of lockfiles like npm, yarn, cargo, etc. It also added some unrelated goodies like being able to work outside of $GOPATH.

> Deno appears to be trying to become the Linux of Javascript, through the innovative method of abandoning the concept of "package" entirely and just running code straight from wherever on the Internet it happens to live today.

I really like Deno for this reason. Importing modules via URL is such a good idea, and apparently it even works in modern browsers with `<script type="module">`. We finally have a "one true way" to manage packages in JavaScript, no matter where it's being executed, without a centralized package repository to boot.

Then again, this broke a package that, by its very nature, isn't running in production. And the problem was solved within three hours.

So I'm not sure how much everything-used-to-be-great-nostalgia is justified here.

Someone rolls out code where a serious bug fell through QA cracks, and appears to be breaking a mission-critical path. Your biggest client is on the phone screaming FIX IT NOW. Three hours is an eternity.

Screaming "FIX IT NOW" because bootstrapping a new React app isn't working? Who, what, when, where?!

You roll back one version. Problem is fixed in thirty seconds.

Let’s add: it appears to be breaking a mission-critical path that also slipped through cracks in QA. Mistakes happen; run CI/CD before getting to the mission-critical path.

My development environment is my production environment.


I think you missed my point, so let me clarify: if your job is to develop software, then your computer is your production environment. It's where you run your production - your development. This is hopefully separate from where your customers run development.

Only as much as it ever is. That's why I'm making fun of it.

I remember the beginning of React (before Webpack), when server-side compilation looked fine and the magic just worked with <script>react.js</script> in the browser. It looked like a new era where HTML was fixed. But no, we have 15 standards now. For me it was over when I found a 3-line Webpack module with a 20-line README. We have 1000 modules, and 1000 weak points along with them. React has 1000x overhead.

Any package and package manager has pain points:

- no standards, API connection issues (different programming styles and connection overhead)

- minor version issues (like this 1-hour 0-day bug)

- major SDK issues (iOS deprecating OpenGL)

- source package differences (Ubuntu/CentOS/QubesOS each need different magic to use the same packages)

- overhead by default everywhere, which produces multiple issues

I'm a developer, but I'm also on-call 24/7 for a Node.js application. The number of people here saying "this is why you don't use dependencies" or "this is why you vendor your deps" is frustrating to see. No one _but no one_ who has managed complex enough systems will jump on the bandwagon of enterprise-ready, monolithic and supported over something like Node.js. I'd trade in my JavaScript for J2EE about as fast as I'd quit tech and move up into the mountains.

There are trade-offs, absolutely. Waiting on a vendor to fix a problem _for months_, while sending them hefty checks, is far inferior to waiting 3 hours on a Saturday for a fix, where the actual issue only affects new installations of a CLI tool used by developers and can trivially be sidestepped. If anything, it's a chance to teach my developers about dep management!

I'm positive my stack includes `is-promise` about 10 times. And I have no problem with that. If you upgrade deps (or don't) in any language, and don't have robust testing in place, the sysadmin in me hates you - I've seen it in everything from Go to PHP. There is no silver bullet except pragmatism!

>I'd trade in my JavaScript for J2EE about as fast as I'd quit tech and move up into the mountains.

Sadly, I dream of doing this very thing every day. I'm at that notch on the thermometer just before "burned out". I love creating a working app from scratch. However, I'm so sick of today's tech. The app stores are full of useless apps that look like the majority of other apps whose sole purpose is to gather the user's personal data for monetizing. The web is also broken with other variations of constant tracking. I'm of an age where I remember time before the internet, so I'm not as addicted as younger people.

Send me a message if you want - I'd love to share what I'm building with you, as it is intended to resolve that exact feeling. I sympathize entirely.

If it's a log cabin, just tell me where to be, and I'll show up with hammers and saws!

There’s no silver bullet you’re absolutely right, but does that mean there isn’t room for improvement? Or that you shouldn’t try? Dropping all dependencies is extreme for sure but to argue against something as simple as vendoring is a bit odd.

You’re correct - there is room for improvement. The “npx” tool is an easy place to start! And absolutely agreed: dropping dependencies is extreme, and vendoring not so much - but in my experience vendoring often means “don’t ever touch again until a bad security issue shows up”. I was being a little bit too snarky in my comment tho, absolutely :)

Vendoring causes more problems than it solves. There are plenty of things that could be improved about the node ecosystem, but a lot of the criticism isn't based on logic; there seems to be a large population on HN who just inherently hate large numbers of dependencies and will grasp for any excuse to justify that hate.

Funny, I run an “enterprise” stack almost entirely made of Java. I wouldn’t trade it for NodeJS for the world.

Making upstream changes indeed would be very, very hard. But I never have to make upstream changes because they’ve spent quite a large amount of effort on stability.

I'm also making enterprise-grade software with quite a few external dependencies. I had to email the developers of the biggest dependency multiple times because of bugs but they were all fixed within a few weeks in a new patch release. They also went out of their way to provide me with workarounds for my problems. In the NPM world you are on your own.

Why? You can email package maintainer just as well, or better yet - open an issue on GitHub.

Sure, but JavaScript and J2EE aren't the only options. You can use a language with more built-in functionality, reduce the use of unnecessary external libraries, and/or limit those libraries to ones from trusted sources.

I honestly have no idea if you prefer Node.js or J2EE after reading this comment.

They mean that they would only trade Node.js for J2EE the day they can also quit (so that they don't have to use J2EE).

J2EE was hell, but Java EE was quite decent!

Pragmatism - do programming to solve real-life problems rather than creating a broken ecosystem which requires constant changes (and constant learning just to stay on top of them) to fix a bad design

> I'd trade in my JavaScript for J2EE about as fast as I'd quit tech and move up into the mountains.

I think the snark is obscuring the point of this comment.

And the source code of the library is:

   function isPromise(obj) {
     return !!obj && (typeof obj === 'object' || typeof obj === 'function') && typeof obj.then === 'function';
   }

Here's my off-the-cuff take that will not be popular.

A function like this should be a package. Or, really, part of standard js, maybe.

A) The problem it solves is real. It's dumb, but JS has tons of dumb stuff, so that changes nothing. Sometimes you want to know "is this thing a promise", and that's not trivial (for reasons).

B) The problem it solves is not straightforward. If you Google around you'll get people saying "Anything with a .then is a promise' or other different ways of testing it. The code being convoluted shows that.

Should this problem be solved elsewhere? Sure - again, JavaScript is bad and no one's on the other side of that argument, but it's what we have. Is "just copy-paste a wrong answer from SO and end up with 50 different functions in your codebase to check something", like other languages that make package management hard, so much better? I don't think so.
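To illustrate the non-triviality, here's a sketch (`hasThenProperty` and `isPromiseDuckTyped` are hypothetical names for the two checks people commonly suggest):

```javascript
// Why the check isn't trivial: the naive tests people suggest
// ("anything with a .then") disagree with the stricter duck-type check.
const hasThenProperty = (obj) => !!obj && 'then' in Object(obj);

const isPromiseDuckTyped = (obj) =>
  !!obj &&
  (typeof obj === 'object' || typeof obj === 'function') &&
  typeof obj.then === 'function';

const notAPromise = { then: 42 }; // has a .then, but it isn't callable

console.log(hasThenProperty(notAPromise));    // true  -- naive check is fooled
console.log(isPromiseDuckTyped(notAPromise)); // false -- callable .then required
console.log(isPromiseDuckTyped(Promise.resolve())); // true
```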

No. There is no reason why it should be a package by itself. It should be part of a bigger util package which is well maintained, tested, and has many maintainers actively looking at it, with good processes such as systematic code reviews, etc.

At work, our big webapp depended at some point indirectly on "isobject", "isobj" and "is-object", which were all one-liners (some of them even had dependencies themselves!!). Please let's all just depend on lodash; it will actually eventually reduce space and bandwidth usage.

Yep, in Java-land this would be in an Apache Commons (or Guava, etc.) module with dozens and dozens of other useful functions.

Yeah, but the question is how far we should go with that. Should we do:

    const isFalsy = require("is-falsy");
    const isObject = require("is-object");
    const isFunction = require( "is-function" );
    const hasThen = require( "has-then" );

    function isPromise(obj) {
      return !isFalsy(obj) && ( isObject(obj) || isFunction(obj) ) && hasThen( obj );
    }

Just because the code line is more than 50 characters doesn't mean that we need a new library for it.

All of those can pretty much be handled natively, and obviously. They're all primitive:

- isFalsy would just be `!`

- isObject would use `typeof`

- isFunction would use `typeof`

Where a library becomes helpful is when you have:

* A real problem (none of those are real problems, and the npm packages for them are essentially unused jokes)

* A solution that is not intuitive, or has a sharp edge, or requires non-obvious knowledge, or does not have a preexisting std approach

Checking for a promise, given the constraints of having multiple types of promises out in the world, falls into both of those. Checking if something is falsey, when Javascript provides !, does not fall into either.

I think all of the above might already be libraries on npm. From what I remember, npm has isInteger, isPositive, is-odd, is-even.

All of the packages you mentioned are maintained by the same guy.

Have you seen his twitter? It's incredibly cringey. I don't understand how someone could be so arrogant to claim millions of companies use his software, when his software is isFalse. Not to mention his hundreds of packages that literally just output an emoji.

Reminds me of Dr. Evil’s monologue about his Father making ridiculous claims about inventing the question mark.

isFalsy is just “!”; I don't think we need a new library for a more verbose way to express a one-character unary operator, no, nor does it meet the standard of “The problem it solves is not straightforward” proposed upthread.

I'm not surprised it exists (and it literally is just a more verbose, indirect way to invoke “!” that nevertheless is a 17-sloc module with a bunch of ancillary files, one direct dependency and, by way of that one, 17 second-order dependencies and, while I didn't check further, probably an even more ridiculous number of more distant transitive dependencies).

I'm just saying it's neither necessary nor consistent with the standard for when a library is a good idea proposed upthread, so suggesting it as part of an attempted reductio ad absurdum on that standard is misplaced.

> 0 Dependents

if a tree falls in the woods...?

> Weekly Downloads
> 0

as of 2020-04-26T00:39+00:00

>Or, really, part of standard js, maybe.

I think this would be the solution. I feel like a lot of the NPM transitive dependency explosion just comes from the fact that JavaScript is a language with a ton of warts and a lack of solid built-ins compared to e.g. Python. Python also has packages and dependencies, but the full list of dependencies used by a REST service I run in production (including a web framework and ORM) is a million times smaller than any package-lock.json I've seen.

This is correct. I post the same thing every time one of these JS dependency hell issues pops up, but it's the case because it's true: The problem is the lack of a standard library. It's not that people don't know how to write a left-pad function, it's that it's dumb to rewrite it in every project and then remember what order you put the arguments in, etc. So people standardize, but they're standardizing on millions of different little packages.

I think the effort that goes into all the JS syntax and module changes would be better put into developing a solid standard library first.

There's a rich standard library, you don't need a package to left-pad.


    x instanceof Promise 
It works for standard promises. Sure, there are non-standard promises (ancient stuff) that, to me, shouldn't be used - and a library that uses them should be avoided. So why do you need that code in the first place?

Also, that isPromise function will not work with TypeScript. Imagine you have a function that takes something that can be a promise or not (which is also bad design in the first place), but then you want to check if the argument is a Promise; with `instanceof` the compiler knows what you are doing, otherwise not.

Also, look at the repo: a ton of files for a one-line function? Really? It takes less time to write that function yourself than to include that library. But you shouldn't have to write that function in the first place.

Your implementation is broken even if everything uses native Promises. I don't know how many times this exact thread needs to happen on HN (as it has many times before) until people realize their "no duh" implementations of things are actually worse than the thing they're criticizing.

Make an iframe.

In the iframe:

    > window.p = new Promise(() => {});
From the parent window:

    > window.frames[0].p instanceof Promise
Congrats! Your isPromise function was given a Promise and returned the incorrect result. The library returns the correct result. Try again!

In case someone else is also confused by this: it seems that instanceof checks whether the object's prototype matches, and these prototypes are not shared across different contexts, which iframes are [0]. (Though I would still like to know why it works like this.)

[0] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

For security reasons - you can modify the prototypes and you wouldn't want iframes to inherit that.

No, it was not given a Promise. It was given a foreign object from another window. If you want to inspect another window you should not be reusing code that is designed for single threaded operations. Instead, have a layer that translates, serializes, or explicitly defines an interface that the objects we are dealing with are foreign and need to be transformed. Then the abstraction implementation details of dealing with multiple windows become a concern of a single layer and not your entire codebase. Implicitly and magically treating a foreign window as this window, will fail in many subtle and unknown ways. The "brokenness" you mention is not in that implementation, it is correctly breaking, telling you that what you are doing is wrong, then you try to bypass the error instead of fixing your approach.

For foreign-origin iframes, that's exactly what people do using `postMessage`. But for same-origin iframes there's no need since you can access the iframe's context directly. So people can (and do) write code exactly like this that accesses data directly.

And it was given a Promise. You just shouldn't use instanceof in multi-window contexts in JavaScript. This is why built-ins like `Array.isArray` exist and should be used instead of `arr instanceof Array`. Maybe you'd prefer to write to TC39 and tell them that `Array.isArray` is wrong and should return false for arrays from other contexts?

There's no use jumping through hoops to avoid admitting that OP made an error. They were wrong and didn't think of this.

GP's comment screams XY problem, which seems to be increasingly common these days.

If you think pointing out a bug due to an edge case someone didn't think of is the XY problem, I'm afraid you don't know what the XY problem is.

The problem was to get the promise out of the iframe when you shouldn't do this directly in the first place.

This literally is an XY problem: "I need to do A but it's giving me bad results, what do I need to add?" - "Don't use A, it's bad practice. Use B instead and keep using built-in tools instead of hacking something together" In this case use instanceof instead of is-promise because it's a hack around the actual problem of getting objects out of a different context that was explicitly designed to behave this way.

I'm afraid that you don't know what an XY problem is.

JavaScript developers always seem to think they are the smart ones after their 6 weeks of some random bootcamp and then you end up with some crap like NPM where a single line in a package out of hundreds maintained by amateurs can break everybody's development environment.


Yikes, please don't break the site guidelines like this.


I didn't even mean you but the general JS community but if you want to think I did, okay, feel free to do so.

Though, let's also appreciate just how niche that case is. I'd be surprised if more than 0.5% of the JS devs reading this will ever encounter that scenario where they are reaching across VMs like that in their life.

`obj instanceof Promise` and `typeof obj.then === 'function'` (is-promise) are very different checks. Frankly, I don't think either belongs in a library. You should just write that code yourself and ponder the trade-offs: do you really just want to check if an object has a then() method, or do you want to check its prototype chain?
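A sketch of how the two checks can disagree, using a hypothetical chainable `Task` class:

```javascript
// Duck-type check, in the style of is-promise.
const isPromiseDuck = (obj) => !!obj && typeof obj.then === 'function';

// A chainable class that happens to name a method "then",
// but has nothing to do with promises.
class Task {
  then(cb) { cb('done'); return this; }
}

const t = new Task();
console.log(t instanceof Promise); // false -- prototype-chain check
console.log(isPromiseDuck(t));     // true  -- duck-type check
```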

TypeScript supports 'function f(x: any): x is T' as a way to declare that if f returns true, x may pass as type T


This package has an index.d.ts file that utilizes exactly this.

1) I'm not defending the implementation of is-promise. I don't care to, I'm not a javascript developer.

2) > sure there are non standard promises, ancient stuff, that to me shouldn't be used

If you're building a library, or maintaining one that's been built over many years, you can't easily make calls like that.

> If you're building a library, or maintaining one that's been built over many years, you can't easily make calls like that.

Well, you can, and in the JS ecosystem you'll often find cases where there are two libraries (or two broad classes of libraries) for a certain function that make different choices: one makes the simple, modern choice that doesn't support legacy, and one does the complex, messy thing necessary to deal with legacy code. Which you use depends on your project and its other constraints.

OK, then the legacy library can't easily make that choice. I'm not saying every single javascript developer should be accepting async or sync callbacks, just that some libraries are choosing to do that for legitimate reasons.

Are you saying that this isPromise package will not play well with TypeScript? One of those files (index.d.ts) solves the TypeScript problem using type predicates. TypeScript WILL know that the object is a promise if it returns true.

This function should absolutely NOT be a package. The problem is that JS has a very minimal standard library and despite tons of money going into the system, nobody's had the good sense to take a leadership role and implement that standard. In other languages you don't need to include external packages to determine the types of objects you're dealing with, or many other things.

And there's an interesting discussion to be had if it shouldn't be one of those snippets that everyone copies from Stackoverflow instead. And how much trouble in other ways that alternative has caused.

I don't think it should be a package.

One-liners without dependencies like this should live as a function in a utility file. If justification is needed, there should be a comment with a link to this package's repo.

What's the difference between a utility file and a package? That seems like a distinction without a difference to me.

If you use the same one liners in more than one project and you copy that utility file over, the line gets even fuzzier.

The utility file will never be updated and break your build without you doing it yourself.

Also you can just read the code the same way you read any other code. And since it's in your codebase and git diffs, you will read it.

Because the implementation detail of is-promise actually is important. It just checks if an object has a .then() method. So if you use it, it's just as important that you know the limitation.

Not everything needs to be swept under the rug.

Also: The utility file will never be updated and fix existing issues within the utility itself (unless you look up the package and diff it yourself). It's a trade-off.

As the commenter who suggested keeping it in a utilities file, I'd say that the trade-off is heavily weighted to not importing it as a package.

When you cribbed the code you should have completely understood what exactly the package was doing, and why, and known what issues it would have had. Since it's a one-liner, it is transparent. Since it is without dependencies, it is unlikely to fail on old code. So it's unlikely to have existing issues and unlikely to develop new issues.

Of course, if you end up using new features of the language in your code, it may fail on that, but the risk of old stuff failing should have already been factored in when you decided to upgrade. In fact, the one-liner solves this better, since you decide the pace of adapting your one-liner to the new features, not the package maintainer.

That's the trade-off I would most likely take in the "isPromise" case. But the opening question was a generic one ("What's the difference between a utility file and a package"), so the answer should reflect both sides.

I'd say that it should rather be a part of the type system. Some kind of `obj isa Promise` should be the way to do this, not random property checks. But that's JS...

The thing is that there is the Promise "class", which is provided by the environment, but there is also an interface called PromiseLike, which is defined as having a method called then that takes one or two functions. Now, JS doesn't have nominal typing for interfaces, so you have to do "random property checks".

TypeScript partially solves that by declaring types, but if you have an `any` variable, you still need to do some probing to safely convert it to a PromiseLike, because TypeScript goes to great lengths to not produce physical code in its output, attempting to be just a type checker.

Perhaps if TS or an extension allowed "materializing" TS, that is `value instanceof SomeInterface` generated code to check for the existence of appropriate interface members, this could be avoided, but alas, this is not the case.

Shouldn't it then be called is-promise-like? Also, if you're being loose about it anyway, can't you simply go for `if (obj && typeof obj.then == 'function')` and call it a day? I'd say that's short enough to include your own version and not rely on a package.

I think that module overcomplicates it as it is, and most people don't need that level of complication in their code.

> Perhaps if TS or an extension allowed "materializing" TS, that is `value instanceof SomeInterface` generated code to check for the existence of appropriate interface members, this could be avoided

It's not perfect and a bit of a bolt-on, but io-ts works reasonably well in this area:


In theory `x instanceof Promise` would work, but the reason for this package is that there are many non-standard Promise implementations in the JS world.

It wouldn't work even if everything were native – see my reply above.

That applies for browsers, yes (though I'd argue is a rare edge-case), but create-react-app is a Node.js application.

create-react-app isn't even using is-promise directly. It's several hops in the dependency graph away.

Promises were not always part of the standard and for many years were implemented in user space, by many different implementations. Using duck typing like this was the only way to allow packages to interact with each other, as requiring an entire stack to say only use Bluebird promises is not realistic at all.

I'm totally with you on this. It's dumb that this is a problem but it is actually a problem.

I think you're on the right track. We all (I hope) agree that stuff like this should be standardized. But that's not the same as "should be a package".

At the very least, the W3C, or the Mozilla Foundation, or something with some kind of quasi-authority should release a "JS STD" package that contains a whole bunch of helper functions like this. Or maybe a "JS Extras" package, and as function usage is tracked across the ecosystem, the most popular/important stuff is considered for addition into the JS standard itself.

Having hundreds of packages that each contain one line functions, simply means that there are hundreds of vectors by which large projects can break. And those can in turn break other projects, etc.

The reason, cynically, that these all exist as separate packages is that the person who started this fiasco wanted to put as high a download count as possible on his resume for packages he maintains. Splitting everything up into multiple packages means extra cred for doing OSS work. Completely stupid, and I'm annoyed nobody has stepped up with a replacement for all this yet.

A function like this should not be something that anyone even thinks of writing or using.

In properly designed languages, values have either a known concrete type, or the interfaces that they have to support are listed, and the compiler checks them.

Even in JavaScript/TypeScript, if you are using this, you or a library you are using are doing it wrong, since you should know whether a value is a promise or not when writing code.

This function is most likely an artifact from before promises got standardized. One way promises took off and became so ubiquitous is that different implementations could interop seamlessly. And the reason for that is that a promise was defined as 'an object or function having a then method which returns a promise when called'.

Doesn't excuse the JS ecosystem and JS as a whole, which truly is a mess. But there's a history behind these things.

I think the point of the comment is: you should not be testing for this at all.

If your API works with promises, call .then() on what is handed to you. That's it. Don't make up emergent, untestable behavior on the spot.

You need to do this test if you are creating a promise implementation. That was my point, there is a reason code like this exists.

Why would an implementation need to test for it?

ISTM that a framework may need to test for promiseness if it calls promises and functions differently, but it can and should be done as a utility in the framework, not as a separate package.

I agree with that. I have no idea why it’s in a separate package. But I can say that about many packages :).

It’s possible to just treat everything as a promise by wrapping results in Promise.resolve(), but that can have performance implications that some frameworks might want to avoid by only going down the promise route when they have to.

For promise implementations, If the callback to then() returns a promise, the promise implementation detects that and resolves that promise behind the scenes: http://www.mattgreer.org/articles/promises-in-wicked-detail/...
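That resolution behavior is easy to see with native promises: if a then-callback returns a promise, the chain waits for it and passes along the unwrapped value.

```javascript
// If a .then() callback returns a promise, the implementation detects it
// and resolves it behind the scenes before calling the next callback.
Promise.resolve(1)
  .then((v) => Promise.resolve(v + 1)) // returns a promise...
  .then((v) => {
    console.log(v); // 2 -- ...which was unwrapped, not passed through
  });
```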

> Or, really, part of standard js, maybe.

It's a part of node, at least: https://nodejs.org/docs/latest-v12.x/api/util.html#util_util...

This will work for a standard Promise, which is great, but not for weirdo made up promises. It also was released, I think, in 2018.

It's one thing if you own the entire codebase, but if you're building a popular, multiple-years-old library/framework, you can't make the same assumptions.

I'm only finding out about it today TBH. Usually I go for what this library does:

  function isPromise(obj) {
    return typeof obj?.then === 'function'
  }

Shouldn't a JS framework exist that includes these basic static checks in its core and offers them as built-in methods? Why load this as an external package - why not copy the code and maintain it locally?

And it doesn't even check if it's a Promise. It's violating its own naming contract. At least it should be called isPromiseLike. To check if something is actually a Promise, all you need to do is `foo instanceof Promise`.

That won't work across window boundaries, since each window environment gets its own distinct version of Promise.

Then instanceof will break for all native objects. Who writes code that checks instances across window boundaries? This is flawed beyond the question of how to properly check an instance; it's bad architecture. The result? This post, and probably more subtle bugs surfacing along the way.

How does that make sense in any universe? Just because I have a function named "then" does not mean that my object is a promise. Maybe "then" is the name of a domain thing in my project, for instance a small DSL or something like that. arghhhhhh!

Objects with a .then(...) method are treated by the language as though they have Promise semantics

See Promise.resolve https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
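A minimal demonstration of that, assuming nothing beyond standard JavaScript: any object with a callable `then` (a "thenable") is assimilated by `await` and `Promise.resolve`.

```javascript
// A plain object with a callable .then() -- not a Promise instance at all.
const thenable = {
  then(resolve, reject) { resolve(42); },
};

(async () => {
  // await treats the thenable as a promise and calls its then() method.
  const value = await thenable;
  console.log(value); // 42
})();
```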

Consider it a special function like "__init__" in Python. I think this is one of the problems with duck typing: the existence of a public method name introduces possible name collisions across the whole codebase.

It's almost as if treating a dynamically typed interpreted language as if it had static types like C++ is fundamentally broken.

Does anyone know if it really needs the `!!obj &&` at the start? Isn't that redundant with checking that the type is either "object" or "function"?

And is there a reason to use `!!` inside a conditional? Wouldn't `obj &&` do basically the same thing?

Unfortunately, that's not redundant, because `typeof null` returns "object":


    !!null && false   // evaluates to false
    null && false     // evaluates to null
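To make that concrete, here's a sketch (the no-guard variant is hypothetical, for illustration):

```javascript
// Why the !!obj guard matters: typeof null is "object", so without the
// guard the next check would dereference null and throw.
console.log(typeof null); // "object"

function isPromiseNoGuard(obj) {
  // hypothetical variant of is-promise with the !!obj guard removed
  return (typeof obj === 'object' || typeof obj === 'function')
    && typeof obj.then === 'function';
}

try {
  isPromiseNoGuard(null);
} catch (e) {
  console.log(e instanceof TypeError); // true -- can't read .then of null
}
```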

The irony is there's a simpler way to do it, making the library even less necessary: `const isPromise = x => Promise.resolve(x) === x`
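A sketch of how that identity trick behaves. Note it only recognizes native, same-realm promises; unlike is-promise's duck typing, it rejects thenables and (because PromiseResolve checks `x.constructor === Promise`) even Promise subclasses:

```javascript
// Promise.resolve(x) returns x unchanged only when x is already a
// native promise whose constructor is Promise itself.
const isNativePromise = (x) => Promise.resolve(x) === x;

console.log(isNativePromise(Promise.resolve(1))); // true
console.log(isNativePromise({ then() {} }));      // false -- thenables get wrapped
console.log(isNativePromise(42));                 // false

// A Promise subclass instance fails the constructor check, so it gets
// wrapped in a fresh promise and the identity test returns false.
class MyPromise extends Promise {}
console.log(isNativePromise(MyPromise.resolve(1))); // false
```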

It didn't break because of the source, it broke because of all the packaging/module bullshit around it. It seems the Javascript ec(h)osystem has firmly come down on the philosophy of "make modules very easy to use, but very difficult to make (correctly)".

The predictable explosion in dependency trees has caused the predictable problems like this one. I feel I much prefer the C/C++ way of "modules are easy to make, but difficult to use".

A nice one line argument for type safety.



It's not useful, interesting, or accurate to stratify people this way. You have no idea what someone's intelligence level or background is based on their usage of JS or C.

I loathe JS, but one of the best devs I know likes it. People's mileage varies.

For me personally, there's much more money in easy React products than making games. I'm not a great dev, but I'd be doing this same work even if I were.

I sincerely hope you're being sarcastic here.

Actually VHDL / Verilog programmers are on top of the food chain.

I don't know much about VHDL, but there seems to be a non-scientifically-measured negative correlation in quality between the candidates I've interviewed who claim VHDL experience and those who don't.

And is that reflected by their compensation, or ... ?

They will eat you.

The “food chain?” What do you mean by that?

Wow just wow. So here's your new Promise object:

    class World { then () { return 0; } }
    isPromise(new World()) // true

If there really isn't a safe and better way to tell if an object is an instance of Promise…then color me impressed.

There are custom Promise implementations (for reasons), such as bluebird.js. If you're supporting legacy browsers, there will be no standard Promise object. So the simplest way to check for the Promise contract is the code posted. But yes, in an ideal world, one would be able to just do `promise instanceof Promise`.

These cases should really be handled at the compilation/transpilation level, since there is one, and users should just write latest generation JavaScript without these concerns.

I mean, if you have to assume deliberately adversarial action on the part of your own codebase, you may have worse problems than having to duck-type promises.

A class with a `then` method isn't a rare thing that could only come up adversarially. Below I've linked two examples from the Rust std library (I just chose that language because its documentation makes it easy to search for objects that have a method named then). I think we can be sure that both booleans and the "less than, equal to, or greater than" enum are not in fact promises.



Cypress.io has chainable .then functions but they are not await-able and the documentation clearly states they are not promises and cannot be treated as such. It’s a bad idea, but it is out there.

I haven't used Cypress, but looking at its docs, I don't know that I'd agree its use of "then" is all that bad. I agree they'd have done better to find a different name, but this at least seems like the least possible violation of least surprise if they are going to reuse the name.

At the same time, it is intensely wild to me that their "then" is an alternative to their "should", which apparently just re-executes the callback it's given until that callback stops throwing. If your tests require to be re-run an arbitrary and varying number of times in order to pass, you have problems that need to be dealt with in some better way than having your test harness paper over them automatically for you.

What do Rust idioms have to do with Javascript?

The language has nothing to do with it. The point is just that the name "then" is a perfectly common method name.

If a small standard library is using it for things that aren't promises, you can bet your ass that there are javascript libraries using it for things that aren't promises.

Like I said, I just chose to look at Rust first because its documentation has a good search bar.

The language has everything to do with it, because the language is the locus of practice. Rust practice is whatever it is, and is apparently pretty free with the use of "then" as a method name, which is fine. Javascript practice isn't the same as Rust practice, and Javascript practice includes a pretty strong norm around methods named "then".

That's why the next time I run into such a method, that doesn't belong to a promise and behave the way a promise's "then" method does, will be the first time I can remember, despite having worked primarily or exclusively in Javascript since well before promises even existed.

I'm sure there is an example somewhere on NPM of a wildcat "then", and that if you waste enough of your time you can find it. So what, though? People violate Rust idioms too from time to time, I'm sure. I doubt you'd argue that that calls Rust idioms themselves into question. Why does it do so with Javascript?

It doesn't call into question the idioms of either language. It does call into question the idea of programmatically deciding whether or not something is a promise based on the assumption that the idiom was followed.

People bounce around between languages, especially to javascript. An expert javascript dev might not call things "then" but the many dabblers might. Going back to the original point this is a footgun, not only an avenue for malicious code to cause trouble.

So your point ultimately is that duck-typing isn't ideal? I mean, I agree, but I'm not sure where this gets us that we weren't before.

I don't actually mind duck-typing.

My primary point is just that you are mistaken when claiming that this bug could only be surfaced by malicious code.

My secondary (somewhat implicit) point is that having an "is-promise" function is a mistake when there is no way to tell if something actually is or is not a promise. This library/function name is lying to the programmers using it about what it is actually capable of, and that's likely to create bugs.

I mind duck typing! That's why I'm so fond of Typescript, where everything that shows up where a promise should be is reliably either instanceof Promise, or instanceof something that implements Promise, or a compile-time error.

Absent that evolved level of tooling, and especially in an environment still dealing with the legacy of slow standardization and competing implementations that I mentioned in another comment, you're stuck with best effort no matter what. In the case of JS and promises, because of the norm I described earlier in this thread, best effort is easily good enough to be going on with. It's not ideal, but what in engineering practice ever is?

So, I mind poorly implemented duck typing, I also mildly mind dynamic typing, but in principle I think static duck typing could be not bad.

With javascript promises in particular, the duck typing suffers from this unfortunate fact that you can't easily check whether something can be awaited upon or not. I don't think I really care if something is a promise, so long as I can do everything I want to it. So I view the issues here as this function over-claiming what it can do, the limitation of the type system preventing us from checking the await-ability of an object, and the lack of static type checking. None of those are necessitated by duck typing.

I disagree that you're stuck with this best-effort function. It's perfectly possible to architect the system so you never need to query whether or not an object is a promise. Given the lack of ability to accurately answer that question, it seems like the correct thing to do. At the very least I'd prefer if this function was called "looks-vaguely-like-a-promise" instead of "is-promise".

Now we're kind of just litigating how "is-promise" is used in CRA, or more accurately in whichever of CRA's nth-level dependencies uses it, because CRA's codebase itself never mentions it.

I don't care enough to go dig that out on a Saturday afternoon, but I suspect that if I did, we'd end up agreeing that whoever is using it could, by dint of sufficient effort, have found a better way.

On the other hand, this appears to be the first time it's been a significant problem, and that only for the space of a few hours, none of which were business hours. That's a chance I'd be willing to take - did take, I suppose, in the sense that my team's primary product is built on CRA - because I'm an engineer, not a scientist, and my remit is thus to produce not something that's theoretically correct in all circumstances, but instead something that's exactly as solid as it has to be to get the job done, and no more. Not that this isn't, in the Javascript world as in any other, sometimes much akin to JWZ's "trying to make a bookshelf out of mashed potatoes". But hey, you know what? If the client only asks for a bookshelf that lasts for a minute, and the mashed potatoes are good enough for that, then I'll break open a box of Idaho™ Brand I Can't Believe It's Not Real Promises and get to work.

I grant this is not a situation that everyone finds satisfactory, nor should they; the untrammeled desire for perfection, given sufficient capacity on the part of its possessor and sufficient scope for them to execute on their visions, is exactly what produces tools like Typescript, that make it easier for workaday engineers like yours truly to more closely approach perfection, within budget, than we otherwise could. There's value in that. But there's value in "good enough", too.

This is a promise as far as the language is concerned (and the `is-promise` package uses the same definition as the language) - it's sufficient for a value to be an object and to have a `then` property that is callable. For instance, in the following example, the `then` method is being called.

    (async () => ({
        then(resolve) {
            resolve(42);
        }
    }))().then(value => console.log(value)); // logs 42

Or just:

    const p = {then: () => 0}

Well, an object with a then() method is a promise.

    Promise.resolve({then: () => console.log('called')})

Promises autoflatten since you can't have Promise<Promise<T>>, so you'll see that this code prints 'called'.

According to this library, maybe. According to the specification, no: https://promisesaplus.com/

Your code is indeed an example of a promise. A stupid one, since it never “resolves”, but it’s a promise.
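To illustrate that point (a sketch): a thenable whose `then` actually invokes the resolve callback it is handed behaves like a real promise under `await`, whereas `{then: () => 0}` never settles because it ignores its resolve/reject arguments entirely.

```javascript
// A thenable that does resolve:
const resolving = { then(resolve) { resolve('done'); } };

(async () => {
  console.log(await resolving); // 'done'
  // By contrast, `await {then: () => 0}` would hang forever.
})();
```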

I am one of the maintainers of a popular Node-based CLI (the firebase CLI). This type of thing has happened to us before.

I think the real evil here is that by default npm does not encourage pinned dependency versions.

If I npm install is-promise I'll get something like "^1.2.1" in my package.json not the exact "1.2.1". This means that the next time someone installs my CLI I don't know exactly what code they're getting (unless I shrinkwrap which is uncommon).
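For what it's worth, npm can be told to record exact versions at install time via the `--save-exact` flag (or globally via config); the version numbers below are just placeholders:

```shell
# Records "is-promise": "1.2.1" in package.json instead of "^1.2.1":
npm install --save-exact is-promise

# Or make exact saves the default for every future install:
npm config set save-exact true
```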

In other stacks having your dependency versions float around is considered bad practice. If I want to go from depending on 1.2.1 to 1.2.2 there should be a commit in my history showing when I did it and that my CI still passed.

I think we miss the forest for the trees when we get mad about Node devs taking small dependencies. If they had pinned their version it would have been fine.

That’s still the fault of the package developer. “^1.2.1” means “any version with a public API compatible with 1.2.1”, or in other words “only minor and patch versions”.

The whole point of semantic versioning is to guarantee breaking changes are expressed through major versions. If you break your package’s compatibility and bump the version to 1.2.1 instead of 2.0.0 then people absolutely should be upset.

Allowing any version drift of dependencies at all means that if you don’t check in and restore using the package lock file, you cannot have reproducible builds. The package lock files are themselves dependent on which package restore tool you are using (yarn vs npm vs ...).

It’s also much too ambitious to believe that all packages in an ecosystem will properly implement semver. There may even be times where a change doesn’t appear to be breaking to the maintainer but is in actuality. For example, suppose a UI library has a CSS class called card-invalid-data and wants to rename it to card-data-invalid. This is an internal change since it is their own CSS, but it could break a library that overrode this style or depended on this class. I would consider this a minor version, but it could still cause a regression for someone.

> Allowing any version drift of dependencies at all means that if you don’t check in and restore using the package lock file, you cannot have reproducible builds.

This is the germane point in this incident.

The parent comment mentions that SemVer "guarantee[s] breaking changes are expressed through major versions". This is a common misconception about SemVer. That "guarantee" is purely hypothetical and doesn't apply to the real world, where humans make mistakes.

The OP `is-promise` issue is an example of the real world intruding on this guarantee. The maintainers clearly didn't intend to break things, but they did, because everybody makes mistakes.

Which points to the actual value proposition of SemVer: by obeying these rules, consumers of your package will know your _intention_ with a particular changeset. If the actual behavior of that changeset deviates from the SemVer guidelines (e.g. breaking behavior in a patch bump), then it's a bug and should be fixed accordingly.

Back to the parent's point about locking dependency version— I would add that you should also store a copy of your dependencies in a safe location that you control (aka vendoring) if anything serious depends upon your application being continually up and running.

I think you might be misunderstanding the above comment. The default behavior of `npm i <package>` is to add `"<package>": "^1.2.1"` _not_ `"<package>": "1.2.1"`. The point the commenter was trying to make is that the tool itself has a bad default which makes it easy to make mistakes. I would go so far as to argue that `npm i` does not have the behavior a user would expect from a package manager in that regard.

And likewise, I think the point of that above comment is that such a change in default behavior wouldn't be necessary if package authors actually obeyed semantic versioning.

That is: "^1.2.1" shouldn't be a bad default relative to "1.2.1"; you generally want to be able to pull in non-breaking security updates automatically, for what I hope are obvious reasons, and if that goes sideways then the blame should be entirely on the package maintainer for violating version semantics, not on the package/dependency manager for obeying version semantics.

I don't have much of an opinion on this for Node.js, but the Ruby and Elixir ecosystems (among those of many, many other languages which I've used in recent years) have similar conventions, and I don't seem to recall nearly as many cases of widely-used packages blatantly ignoring semantic versioning. Then again, most typically require the programmer to be explicit about whether or not to allow sub-minor automatic version updates for a given dependency, last I checked (e.g. you edit a configuration file and use the build tool to pull the dependencies specified in that file, as opposed to the build tool itself updating that file like npm apparently does).

> If I npm install is-promise I'll get something like "^1.2.1" in my package.json not the exact "1.2.1". This means that the next time someone installs my CLI I don't know exactly what code they're getting (unless I shrinkwrap which is uncommon).

Yes, this is by design. If this weren't the case, the ecosystem would be an absolute minefield of non-updated transitive dependencies with unpatched security issues.

Probably off topic, but just want to say cargo does the same things on the Rust side, and it has been annoying me to hell.

And it's even worse in cargo, because specifying "1.2.1" means the same thing as "^1.2.1".

I feel the real issue here is downstream package consumers not practicing proper dependency pinning. You can blame the Node ecosystem, the maintainer of the package, etc. but there are well-known solutions to prevent this kind of situation.

This wasn't a big problem due to a package being suddenly upgraded in existing code. It's because a scaffolding tool (Create React App), which is used to set up new projects, would set those projects up with the latest (presumably patch, maybe minor) versions of their dependencies. In other words, because those projects did not exist yet, there was nothing to pin.

Unless you mean Create React App should pin all of their (transitive) dependencies and release new versions multiple times a day with one of those dependencies updated.

So you would exchange security for stability. If you use package pinning then you will end up with fossilized packages in your product, which will have all manner of security issues that have already been fixed.

You can always use something like dependabot, which should help you quickly upgrade versions and also protect you from breaking your build.

If a package doesn't provide a stable branch that will receive security updates then it's not mature enough to be used anyway. That's the sensible middle ground between bleeding edge and security, unfortunately most packages/projects aren't mature enough to provide this.

There's a reason companies stick with old COBOL solutions, modern alternatives simply aren't stable enough.

I get notifications to update my Rails apps from GitHub as a matter of course when there's a CVE in my dependencies. Does this kind of thing not exist/is impractical for JS?

From my experience of getting ~30 of those notifications per week for a handful of JS repos, I can very much assure you that it does exist.

As a fellow commenter said, you would ideally use something like dependabot or greenkeeper/snyk.

I think these one-line packages aren't the right way to go. Either JS developers should skip the package system in that case and just copy and paste those functions into their own projects, or there should be more commonly used packages that bundle these one-liners. I mean, is_promise() and left_pad() are not worth their own packages. Dependency trees of 10,000 packages for trivial programs are just insane.

Is someone going to fix that?

>Is someone going to fix that?

Probably not. There is too much code in the wild, and NPM owns the entire JS ecosystem, and there has been too much investment in that ecosystem and its culture at this point for a change in course to be feasible.

The JS universe is stuck with this for the foreseeable future.

It's just a cultural problem. There's no reason why a library should abstract away `typeof obj.then === 'function'` if they want to check if something is a promise. Just write a one-liner the same way you don't pull in a `is-greater-than-zero` lib to check x>0.

The problem is that when you try to level criticism at this culture, a loud chorus of people will show up to assert that somehow tiny deps are good despite these glaring issues (a big one just being security vulns). And funnily enough, the usual suspects are precisely the people publishing these one-liner libs. Then people regurgitate these thoughts and the cargo cult continues.

So there's no "fix" for NPM (not even sure what that would mean). I mean, anyone can publish anything. People just have to decide to stop using one-liner libs just because they exist.

Does it need much to change? I didn't mean to fix NPM. The problem is the non-existing standard-library. Just create one that everybody will use and everybody could cut their dependencies by thousands.

Several of these already exist, like lodash and underscore (which is a subset of lodash). After the rapid improvements on both the browser and node sides of the last couple of years (which filled in many of the blanks in this hypothetical "standard library"), they are less necessary than they may have been before. Also they can become something of a crutch. Fixing a bug a couple of days ago, I realized that an argument of Object.assign() needed to be deep-copied. Rather than adding a dependency for lodash or underscore or even some more limited-purpose deepcopy package, I just figured out which member of the object needed to be copied and did so explicitly. Done.

Another good way to not have to depend on big/tiny/weird modules published by others is to use coffeescript. So much finicky logic and array-handling just goes away.

Not everyone would use it, that's my point. The inertia behind the existing system is too great, especially in enterprise. All that would happen is that library would become just another Node package, and then you've got the "n+1 standards" problem.

The "nonexistent standard library" wasn't a problem in the days when javascript development meant getting JQuery and some plugins, or some similar library. It only became a problem after the ecosystem got taken over by a set of programming paradigms that make no sense for the language.

Yes, in my mind you'd have to change everything from the ground up, starting with no longer using javascript outside of the browser.

> Not everyone would use it

If the right people provided the library, it would be used by enough people.

> Yes, in my mind you'd have to change everything from the ground up, starting with no longer using javascript outside of the browser

What's the point of inside or outside of the browser?

The point is that different languages are best suited to different tasks. Javascript is a simple, very loosely typed scripting language with prototypal inheritance that was developed to be run in the browser. It's a DSL, not a general purpose programming language. Using it elsewhere for applications where another language with stronger and more expressive types would be more appropriate requires hacks like compiling it from another (safer, more strongly typed) language like Typescript, which still results in code that can be fragile because it only simulates (to the degree that a JS interpreter allows) features that the language doesn't actually support.

See the attempt to "detect if something is a Promise" as an example - the function definition for the package makes it appear as if you're actually checking a type, but that's not what the package does.

Most of the unnecessary complexity in modern JS, as I see it, comes from the desire to have it act and behave like a language that it simply isn't.

> It's a DSL, not a general purpose programming language

Sorry, but I fear that ship has sailed ;-)

And I've heard JS was developed by someone who wanted to give us Scheme (you can't go more general purpose than that) but had to resort to a more "friendly" java-syntax. IMHO javascript would be a great general purpose language if the ecosystem wouldn't be such a mess.

>Sorry, but I fear that ship has sailed ;-)

I know, I know. If anyone needs me I'll be in the angry dome.

Isn't this what "utility libraries" like lodash and jQuery (each for their respective domains) are for?

I see a lot of criticism to one-line packages, but IMO in the end what matters is the abstraction.

Thinking of the package as a black box, if the implementation for left-pad or is-promise was 200 lines would it suddenly be ok for so many other packages to depend on it? Why? The size of the package doesn't make it less bug-prone.

I see plenty of people who are over-eager to always be up-to-date, when there really isn't any point to it if your system works well, and so they don't pin their versions. This will break big applications when one-line packages break, but also when 5000-line packages break. Dependencies are part of your source; don't change them for the sake of changing them, and don't change them without reviewing them.

> The size of the package doesn't make it less bug-prone.

Of course it does. It's more bug-prone just by being a package. More code is more bugs and more build-system annoyance is more terror (=> more bugs). If I only need one line of functionality I will just copy and paste that line into my project instead of dealing with npm or github.

> Dependencies are part of your source

I agree. If you see news about broken packages like this and you don't just shrug your shoulders your build-system might be shit.

It would be more ok if left-pad was part of a package called, say, text-utils which also included right-pad, etc. Same with is-promise, it sounds like it should be a function in a package called type-checker.

Weinberg's Law: If Builders Built Buildings the Way Programmers Wrote Programs, Then the First Woodpecker That Came Along Would Destroy Civilization.

Why are So Many of the Words in This Comment Capitalised? Is it a Title of Something?

That's the soft in software.

This sounds very clever, but the nature of software development is quite different from constructing buildings. The rate of innovation is orders of magnitude higher. And as opposed to buildings, software can tolerate a certain amount of failure.

"Pfft. Chickens don't even know what a road is!"

Everyone crying about this on the Internet would do better to just take it as an easy lesson: pin your dependency versions for projects running in production.

This was an honest oversight, and even somewhat inevitable given how many import/export formats a package is expected to support: CJS, MJS, AMD, UMD, etc. It will happen again.

And when it happens the next time, if it ruins your life again, take issue with yourself for not pinning your dependency versions, rather than with the package maintainers trying to make it all happen.

And everyone who depends on projects that pin their dependency versions gets to be victims of security exploits long after they are fixed.

Dependency management is not as simple as you seem to think.

The "magical security updates" theory has never worked. Breakage from insufficiently pinned dependencies is vastly more common than unnoticed fixes in patch releases. On balance, semver has been good for JavaScript, but to the extent it contributed to the popularization of this dumb theory it has been bad. Production apps (and by a transitive relation, one supposes, library modules) should be zealously pinned to the fewest possible dependencies, and those dependencies should be regularly monitored for updates. When those updates occur, tests can be run before updating the pins.

Yes -- pinning dependency versions does not have to be at odds with security.

In fact, how secure is it, really, to keep dependencies unpinned and welcome literally /any/ random upstream code into your project, unchecked? This is yet more irresponsible than letting dependencies age.

But even then, it's not as if you have to choose -- you can pin, then vet upstream updates when they come, and pin again.

Right, making significant changes to the entry points of a library should be marked as a breaking change and bump the major version.

Well, I guess you can choose whichever poison you like.

Pinning isn't meant to be a forever type of commitment. You're just saying, "all works as expected with this particular permutation of library code underneath." And the moment your dependencies release their hot-new you can retest and repin. Otherwise you're flying blind and this type of issue will arise without fail.

The unspoken assumption is that you don't just pin and move on with your life. You take as much ownership over your package.json as you do with your own code, and know that you must actively review and upgrade as necessary (as opposed to just running "npm install" and trusting in the wisdom of the cloud)

And everyone who just upgrades whenever possible gets to be victims of security exploits too.

Users of Debian Stable missed Heartbleed entirely. It simply never impacted them.

I’m working on a thing I’m calling DriftWatch that attempts to track, objectively, how far out of date you are on dependencies, which I call dependency drift. I’ve posted about it here before [1]. I’m using it in my consulting practice to show clients the importance of keeping up to date and it’s working well.

I agree with the parent that it’s important to lock to avoid surprises (in Ruby, we commit the Gemfile.lock for this reason), but it’s equally as important to stay up to date.

1. https://nimbleindustries.io/2020/01/31/dependency-drift-a-me...

There are commercial tools like Black Duck and Sonatype Nexus which are used to scan the dependencies of not just Node code, and highlight out-of-date packages, known vulnerabilities, and license problems.

Tools like Safety can help in the python world, https://pypi.org/project/safety/, and cargo-audit https://github.com/rustsec/cargo-audit in the rust world. Stick them in your build chain and get alerted to dependencies with known exploits, so you can revisit and bump your dependency versions, or decide that that project is not worth using if they can't be bothered to consider security to be as important a feature as it is.

What you do is you pin dependencies, then automate regular dependency upgrade PRs. If your test suite and your CI/CD pipeline is reliable, this should be an easy addition.
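A sketch of what such a controlled upgrade cycle looks like in practice (`some-dep` is a placeholder; `-E` is npm's real shorthand for `--save-exact`):

```shell
npm outdated                    # list pinned deps that have newer releases
npm install some-dep@latest -E  # bump one pin explicitly, keeping it exact
npm test                        # run the suite before committing the new pin
```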

We run Dependabot in our CI pipeline to flag security upgrades, and then action them. I'd much rather have that manual intervention than non-deterministic builds.

There are tools out there (like npm audit) that can alert you to known vulnerabilities.

Exactly: pin dependencies to avoid surprises, and use a CI to test compatibility of new versions, so you can deploy security updates on your own schedule, best of both worlds.

Github even bought Dependabot last year, so it's now free.

> pin your dependency versions for projects running in production

Works for existing apps, but people using create-react-app and angular CLI can't even start a new project.

Nah, create-react-app and others could easily pin dependencies of libraries they install in your new project to known-good versions.

Without doing that bit of diligence, this type of issue should be 100% expected.

Then you can’t upgrade anything unless create-react-app releases a new version (or you eject), which, in addition to the obvious release cadence problem, might introduce other compatibility problems.

By doing that they would avoid this issue, for sure. They would also introduce security issues by using old versions.

And this would do nothing for the fact that `npm install eslint && ./node_modules/.bin/eslint` was also failing.

Pinning dependencies might introduce security issues.

Not pinning dependencies is a security issue.

It's not like pinning means you can /never/ update. You just get to do it on your own schedule.

You can even automate updating to some degree -- running your tests against the latest everything and then locking in to those versions of all goes well.

Again, this only works for project skeletons, and not for any other package that happened to have a transitive dependency on `is-promise` (which is a lot more than project skeletons).

I don't know much about those projects, but why did this break them? Are they not pinning versions?

Because they are starting a new project from scratch and would have nothing to pin their dependencies against?

Maybe I'm misunderstanding how those projects work. From what I recall, they generate a project, including the package.json. So I'm not sure why they couldn't just generate the package.json with pinned versions?

I don't write much JS, and have only used create-react-app just a few times, so feel free to explain why this isn't possible.

package.json only lists top-level dependencies. package-lock.json tracks all dependencies, and dependencies of dependencies. is-promise is one of those dependencies of a dependency, which you don't have much control over.
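An abbreviated, illustrative lockfile entry (the direct-dependency name and version numbers are made up): package.json never mentions is-promise, but the lock file records exactly which version was resolved for the dependency that requires it:

```json
{
  "dependencies": {
    "some-direct-dep": {
      "version": "2.4.0",
      "requires": { "is-promise": "^2.1.0" }
    },
    "is-promise": {
      "version": "2.1.0",
      "resolved": "https://registry.npmjs.org/is-promise/-/is-promise-2.1.0.tgz"
    }
  }
}
```

If the lock file is checked in and restored with `npm ci`, the `^2.1.0` range is never re-resolved, which is what makes the build reproducible.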

How would a top level dependency change versions if it bumped a transitive dependency? Is that a thing in js-land?

How could a dependency-of-dependency change version if one of the direct dependencies doesn't change version? I guess, if the direct dependency isn't pinning that version? Another case of, everyone should be pinning dependencies.

Exactly, node's conventions are to allow a range of versions (semver compatible). True, if all dependencies were pinned, this wouldn't come up as often.

That also means that there would be a lot more updating when security issues are found.

I'm a novice in this area but if your project relies on a bunch of external node packages why wouldn't you download them all and host them locally or add them to version control?

Adding them to your own version control is a nightmare: Your own work will drown in all the changes in your dependencies. The repository will quickly grow to gigabytes, and any operation that would usually take seconds will take minutes.

It's also just not needed. Simply specifying an exact version ("=2.5.2") will avoid this problem. The code for a version specified in this manner does not change.

Yes, putting your dependencies in version control alongside your project is no fun. Commit history is muddied, but also if your production boxes are running on a different platform or architecture than where you and your team develop, that can make a big mess too.

That said, with a big enough team and risk-averse organisation, it can be a brilliant idea to put your dependencies in /separate/ version control and have your build process interact that way.

In that scenario, even if your dependencies vanish from the Internet (as happened with left-pad), you are still sitting pretty. You can also see exactly what changed when, in hunting for causes of regressions etc.

To me an even bigger nightmare is your entire project or product depending on some external resource you don't control.

Checking them into your repo is called “vendoring” and it’s one way of solving the problem, yes. Personally, it’s my favorite approach. But it does have some challenges, as other commenters point out.

You'd use a proxy, yes.

> pin your dependency versions

And then to see "npm detected 97393 problems" or whatever the message exactly is.

You don't need to pin them forevermore -- just when you don't want everything to break unexpectedly :).

When you want to upgrade your dependencies, then go ahead and do that, on your own schedule, with time and space to fix whatever issues come up, update your tests, QA, etc.

That’s good: it’s easy to update and it means you do it in a controlled manner rather than the next time something deploys.
