Hacker News
Yarn's Future – v2 and beyond (github.com)
380 points by arcatek 3 months ago | 232 comments

Very happy to see yarn.lock will finally be a proper format that won't need its own parser. YAML subset is a pretty good choice, though I think canonicalized, indented JSON would be a better choice for this use case. Incidentally that's what npm uses as lockfile, I wonder if there's room to have the two package managers share the format (or even share the file itself).

Very excited to see shell compatibility guarantee in scripts as well. Using environment variables in scripts is a pain right now.

Finally one of the biggest news is the switch from Flow to Typescript. I think it's now clear that Facebook is admitting defeat with Flow; it brought a lot of good in the scene but Typescript is a lot more popular and gets overall much better support. Uniting the JS ecosystem around Typescript will be such a big deal.

npm's lockfile is a pain to diff in PRs because of the JSON format: what would be maybe 20 changed lines in yarn runs upwards of 80 once you count the brackets.

With YAML (and with whatever format yarn.lock was in before), the only changed lines are the changes to the version resolutions, hashes, and dependencies.
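As an illustration, a v2-style entry might look something like this; the field names here are my guess at the shape, not the finalized schema:

```yaml
# Hypothetical yarn.lock entry in the new YAML-subset format
# (illustrative field names, not the actual v2 spec):
"lodash@^4.17.0":
  version: 4.17.11
  resolution: "https://registry.yarnpkg.com/lodash/-/lodash-4.17.11.tgz"
  checksum: sha512-cQKh8igo5QUhZ7lg38DYWAxMvjSAKG0A8wGSVimP07SIUEK2UO+arSRKbRZWtelMtN5V0Hkwh5ryOto/SshYIg==
```

Bumping the dependency touches only the version/resolution/checksum lines, with no bracket or comma churn on neighboring lines.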

I'd say safely merging YAML diffs, however, could be trouble.

I don't know how restricted their YAML subset is, but in my experience it's so loose a format the only way to be sure YAML says what you think it says is to run it through a parser.

I think if you're merging lockfile diffs, you're doing something wrong! Merge the package.json diffs and regenerate the lockfile.

Yarn automatically resolves conflicts in yarn.lock if you run "yarn": https://github.com/yarnpkg/yarn/pull/3544

If you're regenerating lock files, you're losing all your locks for the stuff that didn't change, which can lead to unpredictable bugs.

Fortunately as someone else replied, both yarn and npm have safe and easy ways to resolve merge conflicts in their lock files.

> safely merging YAML diffs however could be trouble

Yarn will actually do the merging automatically — if you have conflict markers in your lockfile, just running yarn will parse them along with the rest of the file and produce a new lockfile with the changes from both diffs (unless there's a genuine conflict).

I assume that this feature won't go away with the new lockfile format.

I hope so, but that feature wasn't there from the start: https://github.com/yarnpkg/yarn/pull/3544

Hopefully they'll be able to re-use much of that work for the Yaml file.

They do say "subset of YAML", so presumably that can make it easier. And hopefully they'll keep handling this for you.

As a heavy Open API Spec user, I can tell you that YAML is a nightmare to diff.

Diffability is not a great argument against YAML/for JSON IMO. Tools can usually handle that for you.

The largest concern about YAML is that truncated documents are almost always still valid documents. The likelihood of that happening compared to, say, a git merge gone wrong is much lower, but the consequences are likely much worse.

I don't have a strong opinion on structured data file format, but that's an issue with YAML that often goes unmentioned.

If you run `npm install` again it does detect the conflict and resolve it. https://docs.npmjs.com/files/package-locks#resolving-lockfil... But in practice I haven't found any changes that have been untenable to understand with a diff.

Resolving conflicts isn't why I read the diff (yarn resolves automatically anyway); it's so I can see what has changed.

> I think it's now clear that Facebook is admitting defeat with Flow.

It sounds to me more like they're testing the waters. They're definitely opening up, what with Jest and create-react-app supporting it, but as far as using it themselves goes, would this be the first project to be migrated?

Jest is getting migrated to Typescript as well, not just adding support for it.

And then there's... whatever this is:


>And then there's... whatever this is:

I respect Jamie for the things he's done and built, but his views tend to be a bit extreme.

I agree that flow is not in a good place right now, but to say that they don't give a shit about the community just isn't true.

Flow has been making significant strides in community outreach, inclusion, and detailing their roadmap lately. And Facebook dropping flow would bring us toward a monoculture which I personally think is a bad thing (it's always good to have some competition to keep new ideas flowing and stuff improving), and Jamie advocating for it seems misguided and shortsighted.

Flow happily accepts contributions from outside people, but the somewhat esoteric language choice (OCaml) and the significantly smaller community mean there aren't that many people to contribute anyway.

>I respect Jamie for the things done and built, but his views tend to be a bit extreme.

They might be in general, but these sound like some easily objectively checkable claims:

"If you want visible proof of this [Flow not caring for the community], just look through the repo. PRs are never merged. Issues are never addressed. Pretty much all of the activity there is done by a couple people in the community. Critical issues stay open for years. They do all of their development internally"

PRs are never merged because the github repo isn't the source of truth for Flow, their internal systems are. They have this workflow where they will import PRs from github and apply them internally, then close the PR when it's done.

At a glance, it looks like all PRs are just closed, but looking at them individually shows a different story.

It's the same with React-Native, and several other OSS projects by facebook, and Jamie knows that, having worked at FB when that workflow was used.

It has its faults (I personally really hate the workflow they use, though I get why: GitHub is where the people are, and Phabricator and other internal tools are difficult to integrate with it in a lot of cases), but to say it's because they don't care about OSS, or that they do all their work internally, isn't true. Plenty of FB employees and outside contributors alike make PRs on github, discuss them on github, and merge them internally, closing the PR on github.

Flow has plenty of issues without needing to make them up or exaggerate them. Their errors still take weeks for many to understand and get used to, flow-typed is a mess compared to Typescript's type distribution system, their fucking habit of single-character type variables makes looking at built-in type definitions extremely difficult, the flow binary is still extremely unstable on Windows even after years, and the recent move of aliasing Object and Function to `any` seems like a misguided attempt to simplify some code and speed up some checking.

But at no point do I think they should can the whole thing, and I absolutely think they care about open source. Like I said in the comment above, Flow is in a bad place right now, but they are putting their money where their mouth is, the Flow team is reportedly growing at FB, and over the past few months I've seen a massive uptick in the number of releases, blog posts, external contributions, performance, and a significant decrease in the number of bugs I was hitting on a daily basis.

I agree. Flow's communication with the community has been so mediocre that even Dan Abramov resorted to tweeting his frustrations.

At this point, the attempts to pacify Flow users like myself feels an awful lot like when Silverlight developers were begging folks not to abandon ship. People have to defend their livelihood, I guess.

He’s right. I’ve been using Flow for 2 or 3 years, the tool does the job but it has many bugs and NO support AT ALL. Same for the communication, no roadmap, no yearly post about what’s going on, nothing. I started moving our projects to TS too.

Oh, gotcha! Missed that comment.

Nah, YAML is better because of diffing.

JSON uses commas `,` which will show multiple changes when the last item of a list changes.
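To make that concrete, appending one item to a JSON array dirties the previous line too, purely because of the comma (illustrative diff):

```diff
 {
   "deps": [
     "react",
-    "left-pad"
+    "left-pad",
+    "lodash"
   ]
 }
```

In YAML's block-sequence form, the same change is a single added `- lodash` line.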

Side note: it's really sad that more languages aren't allowing trailing commas in sequence literals and other places, like Python does. In Python, these are legal:

   [1, 2, 3,]
   foo(1, 2, 3,)
And thus, when writing multiline literals and calls, it's pretty common to use the comma as a line terminator, precisely so that new lines can be added at the end without touching the previous line.

Javascript DOES allow trailing commas, though it never caught on since IE6 would blow up on them.

JSON, however, does not.

> The codebase will be ported from Flow to TypeScript. To understand the rationale please continue reading, but a quick summary is that we hope this will help our community ramp up on Yarn, and will help you build awesome new features on top of it.

Another major project moving from Flow to TypeScript.

This sucks because, in the first few years, flow had much better soundness and typescript had some serious issues. I'm a little disappointed and feel as though, similar to Kube vs Swarmkit, the worse technology is winning.

Things are quite different from the first few years. Typescript has largely converged with the capabilities of Flow as a type system, though from its "different direction". It's not entirely ML-ish algebraic types, but it's a usefully fair approximation that has a lot of pragmatic tools to get the job done. Similarly too, the type inferencing engine isn't a "proper" ML-ish one, but it's gotten very good at what it does, again to the point where things are starting to feel a lot more similar to Flow's support than ever. Especially because Typescript is getting a huge workout as the inferencing engine in VSCode, Atom, VS Proper (sometimes), and increasingly more IDEs even for raw JS projects with little to no type information.

Even if you don't believe in the old Unix adage that "worse is better", Typescript today isn't demonstrably worse than Flow. Depending on your metrics, such as general availability of community-supported type information, Typescript is in increasing ways better.

Anders (and the people he works with) has a track record of producing useful pragmatic tools/languages that help developers develop going back about 3 decades.

I've yet to find anything he's done/worked on that I don't like, one of my programming heroes actually.

I worked with Typescript for 2 years, then worked with Flow for 2 years and now I'm back on Typescript working for a big company.

I agree that Flow had better capabilities around soundness. But the tooling around Typescript really made me jealous, specifically in VSCode.

Near the end of working with Flow, Typescript was getting some cool capabilities like refactoring JSX for React apps.

Nowadays I can easily say that Typescript is a far better experience than Flow. There are updates every two months, adding some neat features that you might find in other languages.

Then you add in the power of surrounding tools and their ecosystems like TSLint and it really feels like a next-level coding experience where the tools start writing the mundane code for you, driven by the core TS static analysis.

I'd agree with everything you've said. Flow isn't bad but after working with TypeScript I could never go back. You're spot on with the tooling and ecosystem.

Also, TypeScript has a roadmap and when you report a bug, the devs actually reply. Flow doesn't have any of that.

and now TSLint is merging with ESLint so all that effort and rules duplication will be going away

I don't think that holds anymore. Have you used both Flow and Typescript recently? Speaking from experience: I can't imagine anyone doing so and concluding that Typescript is the "worse technology" at this point.

Not recently. It's now on my TODO to plan a transition of our frontend projects to Typescript. Have been thinking about it since 2.9 when they finally introduced mapped types; before then we were using some of Flow's meta functionality to auto-type our reducers.

This[1] typechecks in Flow but not in TypeScript:

    /* @flow */

    interface HasIdentityFunction {
      id<A>(a: A): A
    }

    class Example implements HasIdentityFunction {
      id<A>(a: A): number {
        return 42;
      }
    }

    var x: string = (new Example(): HasIdentityFunction).id("hello")
[1] https://twitter.com/puffnfresh/status/1077072700609159168

> This sucks because, in the first few years, flow had much better soundness and typescript had some serious issues

Most if not all of the issues it had with types have been solved.

Flow had some major tooling/developer comfort issues from day one and none of those are solved. Not to mention a really closed development roadmap (understandable perhaps, but nonetheless bad).

As a developer, I'm glad TypeScript won. I want more tool/compiler/language/etc makers to understand that the developer experience matters. If someone makes a tool that only works well under one very specific setup and ignores everyone else, that's just very limiting.

This is certainly how I feel; there are large parts of my codebases that I don't think will convert well if I were to change, where we leveraged flow to great effect.

However I certainly take the point that flow has been developer hostile. When we have had issues it has been impossible to get a response (here is a demo, is this a bug or in the pipeline?).

Though flow is v0.91 and Typescript is 3.2, I don't know if I can really fault them that much?

I don't trust TS, they are too much 'move fast and break things'.

Typescript has been moving fast, but they haven't broken that many things. The API and Language Server have, from a semver perspective, obviously changed four times, but in general the language itself has been extremely stable with great backwards compatibility, and most of the type-system strictness additions are behind opt-in flags, making upgrades usually as gentle as you prefer them to be (depending on your attitude to strict type checking).

(I had projects that started with TS < 1.0 and they all still parse and compile today, albeit with tons of lint warnings, particularly to use a better module system than AMD with pre-ES2015 TS imports, and all sorts of new type strictness options to turn on to make them all the more type safe.)

> Yarn is currently fully covered by Flow types, and to be honest it works pretty well for us - the key part being “us”. Because we want to make it as easy as possible for third-party contributors to shim in and help us maintain this awesome tool, we'll switch the codebase to TypeScript. We hope this will help make the codebase feel more familiar than the projects you already contribute to.

Seems like Flow works just fine for them, they just want to make it easier for others to contribute.

Not only that, but it's another Facebook project moving from Flow to Typescript. Jest also announced a Typescript migration last week: https://github.com/facebook/jest/pull/7554

> The log system will be overhauled - one thing in particular we'll take from Typescript are diagnostic error codes. Each error, warning, and sometimes notice will be given a unique code that will be documented - with explanations to help you understand how to unblock yourself.

Why do programmers love error codes? As an end user, they are useless indirection to me and the only way this would even be tolerable is if the explanation was printed directly next to the error code, so why bother? Is it code quality, since you don't have to write a long error message when you emit a similar error? But doesn't that have the drawback of encouraging error code reuse when it might not be appropriate?


Thanks for the replies! I guess I could understand error codes for the searchability, provided they also come with information about the problem specifics.


  $ yarn add foo
  Error YARN1001: Incompatible peerDependencies.
  $ yarn explain YARN1001
  # Some longer text about how two of my modules have
  # incompatible peerDependencies
Better example:

  $ yarn add foo
  Error YARN1001: Incompatible peerDependencies.
  * my-package@1.0.0
  |-* foo@1.0.0
  |-* bar@1.0.0
  |-* left-pad@1.0.1 (peerDependency of bar@1.0.0)
  |-* left-pad@0.9.0 (peerDependency of foo@1.0.0)
  $ yarn explain YARN1001
  # Some longer text about how two of my modules have
  # incompatible peerDependencies

Error codes are googlable.

Things like "TS1234", "flake8 E802", "Yarn E4882" etc are pretty much guaranteed to give you the results you want. Whereas "yarn some-error-text" can be much noisier, especially if the error text changes over time, is short, is obscure or even translated. Worst case they're also more greppable inside the codebase.

Error codes are a really, really good idea if your application is popular and used by devs, IMO.

> if your application is popular and used by devs

I'd argue even if it's not popular and it's used by non-devs.

Error codes saved our support people a ton of time when trying to understand user problems. It's good for the user to be able to understand the problem, but it's great when they can call you up or send a ticket in about error "E4882", and you instantly have a good amount of information about the problem.

As far as Rust is concerned[0] the error code is accompanied by a pretty extensive error message (with lots of arrows and suggestions) but the actual error code documentation is basically an entire book page, it would be completely unusable if that much stuff were printed to the terminal. So the compilation error provides an explanation and the error code links to an expanded explanation.

For instance, this is the basic "use after drop" error message:

    error[E0382]: borrow of moved value: `s`
     --> src/main.rs:5:20
    4 |     drop(s);
      |          - value moved here
    5 |     println!("{}", s);
      |                    ^ value borrowed here after move
      = note: move occurs because `s` has type `std::string::String`, which does not implement the `Copy` trait
this is the expanded explanation: https://doc.rust-lang.org/stable/error-index.html#E0382 On my machine, I have to "page down" twice to get through the entire thing. "rustc --explain E0382 | wc" tells me the markdown source is close to 110 lines and 550 words.

And as other people noted, hopefully other folks discussing the issue used the error code, which makes their discussion not just easier to find but able to survive localisation: if you get a localised error message it's almost impossible to find information on it unless it's absolutely ubiquitous, because the vast majority of the discussions (and especially the most useful discussions) are going to refer to the non-localised version, which your software will not provide.

edit: I can understand taking issue with it though, some systems do use error codes to skimp on the actual error messages. I recall Oracle being a particularly bad offender.

[0] I'd expect Typescript is the same, but don't know for sure

The Rust compiler generates unique error codes for every type of error, and suggests that you run the command `rustc --explain E0275` (as an example) to get more information on the error, with an (often) in-depth description of what the error means, how it is usually caused, how to potentially fix it, etc.

This is extremely useful, because the error output can focus on just saying what the error is (which is great when you already know what the error means), and there is an obvious next step to get more information if you need more help.

And if you don't find the answer, searching online for "rust E0275" gives much more relevant answers than trying to search the right parts of the error message.

In general it improves interoperability between non-human systems.

Say that you have a database. When you ask it for a specific piece of data, and it can't find it, it shows 'data not found'.

Then I build a program that reads information from that database; I program it so that if it receives the text 'data not found' it knows to handle the error somehow.

Ten days later, the guy who programs the database decides that 'oops! we couldn't find the element :(' is a more friendly message to the user. Now my program will stop working until I switch it to the correct error text.

With an error code those kind of things don't happen. If you need extra legibility you can totally send both a code and a message, but the code is expected to remain unchanged and I can trust that it will stay the same in the future.

Grouping behaviours is another use. For example, if I have a system that sends you information and you have sent me wrong input, there's a dozen ways you could have done that (maybe you didn't send me enough data, or you sent it in chinese characters I don't recognise, or I just got gibberish I can't even start to understand). All of those cases might require different messages to the final user, but internally for me they're the same thing ('invalid data') and the things I'll have to do will be the same, so propagating a code serves me well.

There's also the issue of internationalization. If 200 android users across the world are having problems with their phones, and they all get an error 3242, they'll be able to find proper help. If one is showing "the application couldn't start due to memory issues", the other "la aplicacion no pudo iniciarse por problemas con la memoria" and yet another shows "لا يمكن بدء التطبيق بشكل صحيح" we're gonna have trouble identifying all those things as the same problem.

Humans are not always the consumer of error output. Distinct error codes simplify parsing and eliminate ambiguity.

It also makes it easier for developers to look up a specific error in the documentation, assuming it's been documented.

It's also easier for a (technical) user to say they got error 1018 than to copy & paste "An index signature parameter cannot have an accessibility modifier." Especially in titles, and when referencing it multiple times within a body of text.

How does an error code make it easier to look up an error? The workflow is either “get error, google it” or “get error code, google it”. In both cases the docs will be the top hit.

If we were talking 20 years ago I might agree, but I really can’t see the argument with today’s tooling.

Updates can improve the error messages, making them clearer; the error code does not change even when the message does.

Also many times the error message is interpolated with your specific details like

"Syntax error at line 123 in file /home/user/myfile, symbol X is not allowed here". This is a silly example, but often enough, when I google these errors, I have to first strip my own data out of them.

Googling the text of an error often finds lots of unrelated results. So no, the docs might not be the top hit.

All errors are google-able?

All the responses so far seem really helpful but also quite specific. It might also be useful to understand the problem space at a higher level. We also see this in good database design, which says that every record should always have a unique id that is not related to any of the other data. This allows you to freely change things like product descriptions, short name, SKU, price, weight, etc., because you know that no matter what else changes, you have a unique ID that will get you back to that product. Since a unique ID is often a random-looking string, there is nothing stopping you from also having a human-friendly lookup code for a product. For example, the unique id is 186746, but the lookup code is BRWNCOUCH37. If the couch doesn't change color, the lookup code stays good. But if it does, you can change even the lookup code, because the product's unique id remains 186746.

So unique IDs are useful in many places other than just error codes. Hope that helps.

That's exactly the point. The unique error code matches up with documentation so you can easily research your problem. Especially helpful with Google. Yarn "some error happened" is a lot more difficult to track down than Yarn "Error 3617" even if Error 3617 doesn't immediately provide more information.

Many scripts rely on the errors in order to decide what to do. When you have error codes, you can change the actual description of the error without breaking the scripts that rely on the codes. You can also show the errors in different languages depending on the user's settings.

To add to the other replies - you quickly find the utility of error codes when working on a product that ships in many different languages. Diagnosing a bug report from, say, a Chinese user can be a lot easier if the screenshot with the error message also has the code in it somewhere.

You can also ask "why do programmers love GUIDs" and probably receive a similar response.

The HTTP protocol is another famous example of user-facing error/status codes that don't necessarily mean anything on their own.

This isn't really relevant for something like Typescript, but for traditional applications, error codes can help with establishing observability metrics if properly collected, aggregated and analyzed.

IMO error codes are easier to search than English error descriptions.

I’m glad that Yarn will continue. Npm has improved, but it’s still less pleasant to work with than yarn (which basically always does what I expect, not so for npm).

npm has eroded so much of my trust that I am hesitant to switch back to it any time soon. I've tried npm out every few months (since npm 3), and have consistently run into infuriating bugs or unexpected behaviors.

Much of it has been fixed over time, but the frequency and duration of these issues is concerning—and, I think, points to architectural deficiencies being the root of the problem. (And the project is so massive that it's understandably a really challenging thing to manage and triage)

For example:

* npm 5.0.0—5.7.0 didn't play nice with git-based dependencies (https://github.com/npm/npm/issues/17379)

* npm 5.0.0—5.4.1 edits package-lock.json unexpectedly (https://github.com/npm/npm/issues/17979)

* npm 5.0.0—5.4.? doesn't honor incompatible version differences in package.json compared to package-lock.json (https://github.com/npm/npm/issues/16866)

* Take a look at the issues labeled as [big-bug], and how long they've languished (mostly from the v3 era): https://github.com/npm/npm/issues?q=is%3Aissue+is%3Aopen+sor....

* and a bunch of others I can't remember off the top of my head; especially nondeterministic behavior in the v2 and v3 era.

* If you have OS-specific optional dependencies (Mac-only fsevents being a popular one, used by tools like webpack and watchify to massively speed up rebuilds on Mac), then if you run `npm install` on an OS that doesn't support them, then they get removed from package-lock.json. Then when the Mac user pulls the changed package-lock.json and reinstalls dependencies, fsevents isn't installed, and webpack and watchify fall back to a very slow path for watching for file changes. https://github.com/npm/npm/issues/17722

* Whenever I, a coworker, or our CI system pulls changes, we need to make sure that we have the correct dependencies installed because package.json or the lockfile may have been updated. We don't necessarily know if anyone else has changed package.json; we just want to run a command that makes sure node_modules matches the package.json and lockfile. Running `yarn` when there are no changes to package.json takes under a second, so it's easy to just always get in the habit of running it, and it doesn't slow down CI. We can make our build and deploy scripts just run `yarn` to be on the safe side because it's so cheap to run. But with npm, running `npm install` when there are no changes to package.json in a big project still often takes 10-30 seconds.

* I've run into many bugs like this one with npm: https://github.com/npm/npm/issues/19839. For a while, I actually made our deploy script run `npm install` in a loop until it stopped changing things just to be sure it successfully installed everything (but then it turns out running `npm install` multiple times can actually cause issues! https://github.com/npm/npm/issues/18084. To work around that, if you pull changes that include a change to package.json, you have to remove node_modules and then run `npm install`. This made our CI system so slow...). I've reported various bugs like this. The bugs would get no attention, but sometimes they'd mysteriously go away after a few versions. But bugs that go away on their own tend to come back on their own in my experience.

Yarn has only given me one issue in my use of it and it was promptly fixed. I swear by it now.

Even the latest version of npm 6 can get dependency resolution wrong. Like, the basic core feature.

I switched back to npm with version 5 and it's been great. Is there a list somewhere of feature parity between both?

Which part of npm acts unexpectedly?

Not that this is a showstopper, but I have issues with the package-lock.json file where the `resolved` field (the package's registry URL) constantly flip flops between http and https protocols, depending on which machine I'm on (home, work, or docker container), whenever I run `npm install`. Sounds not so bad, but it becomes a mess in git, and causes any docker build caches to become invalidated.

That sounds pretty bad, and I'm not so sure it's an npm bug. Do you have a diff of the change in question? Is it on the npm registry or a custom one?

It's on the npm registry, affecting the npm client: https://npm.community/t/some-packages-have-dist-tarball-as-h...

A very similar bug that we run into sometimes: https://npm.community/t/npm-install-or-npm-update-turns-a-bu...

Oh man, I get that all the time, hate it so much.

Recently bumped into https://npm.community/t/packages-with-peerdependencies-are-i... The attitude towards fixing issues ("we won’t be able to fix until we get through an upcoming tree builder rewrite.", yeah right...) is making me want to check out alternatives

One thing that comes to mind is the lockfile not actually acting as a lockfile.

Running ‘npm install’ updates the lockfile :/

Then there was the issue where it didn’t respect git commit hashes that were added to the lockfile, just taking whatever the latest commit was.

We switched around June last year to yarn because git URL dependencies on branches was broken in npm. It would choose incorrect commit ids inexplicably. Beyond that dependency upgrade time is faster.

Just moved to Yarn this weekend strictly because npm link (used to link to a local version of a package) doesn't work the way I want it to.

Every time I install a new package, my previous `npm link` references break in the node_modules folder. `yarn link` keeps these references. Just as I expect.

I just add a symlink to the folder inside node_modules (so it doesn't need to be in package.json); not the cleanest, but it gets the job done.

The JS ecosystem has its flaws, but one has to appreciate the speed at which momentum shifts, making clear winners obvious.

The move towards TypeScript 'winning' has been fast, and to everyone's benefit.

> , and to everyone's benefit.

Why? I've worked on large codebases in Coffeescript, ES6 and Typescript. Whatever this whole community sings and believes, Coffeescript still wins for me. ES6 is still trying to catch up but will probably never reach the beauty and ease of Coffeescript. Both are transpiled, only ES6 with Babel is a total horror to manage (I just upgraded a large codebase to Babel 7..).

Typescript takes about 2x the time to write if you want to create all your typings properly. I hear you say: only in the beginning, later it will speed up the development process. I've never seen that in reality! I've actually never seen a proper codebase in Typescript. Show me a Typescript codebase not using the type 'any'! In a decent systems language you can't get away with that; it's just a false sense of security.

A good codebase should not depend at all on Typescript or whatever hype comes next. Writing a good codebase is IMHO a craft and should not depend on the language or a bunch of tooling. If Typescript is the way to go, what about Python and Ruby? Abandon them, deprecate them? Are those inferior languages compared to Typescript? Typescript is just another hype, a very smart play by Microsoft btw.

> Typescript takes about 2x the time to write if you want to create all your typings properly.

Not true at all. You balance all of this stuff inside your head anyways: this object has this shape, this function takes these arguments, etc. The only overhead is actually writing them down -- which in itself arguably speeds up development because then your IDE knows about them too.
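To make the parent's point concrete, here's a small sketch (the `Point` shape and `distance` function are made up for illustration): the annotations are just the shape you were already holding in your head, written down once.

```typescript
// The shape you already track mentally, written down once.
interface Point { x: number; y: number }

// The annotations are the only overhead over plain ES6, and now
// the IDE knows the shapes too.
function distance(a: Point, b: Point): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}
```

Once the shape is declared, autocomplete and refactoring tools can use it everywhere `Point` appears.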

I pushed back migrating from Coffeescript to TypeScript and I consider it one of the only times I was really wrong about a front-end technology.

> I've actually never seen a proper codebase in Typescript.

That's the problem then. I have, and have clearly seen in the real world why it's superior. ES6/Coffeescript can obviously be done correctly, but chaos tends to ensue as the flexibility is abused. Over time it makes debugging/understanding difficult. TypeScript makes changing/navigating large codebases a breeze. In other words, it's much easier to do correctly than ES6/Coffeescript.

There are multiple benefits from having TypeScript a clear winner.

1. One standard typed version of JS (as opposed to having both Flow and TypeScript) is a better use of open-source developer time. It also reduces decision fatigue when architecting new JS projects.

2. TypeScript tooling is fantastic. Using VSCode or Webstorm spoils you.

3. TypeScript is a testing ground for experimental language features. It can shape the direction of future versions of JS.

4. Finally, the language is clearly well liked, in addition to being popular. We can debate the pros and cons of its quirks, but overall, the stats say developer satisfaction is high. https://hub.packtpub.com/4-key-findings-from-the-state-of-ja...

> ES6 is still trying to catch up but will probably never reach the beauty and ease of Coffeescript. Both are transpilers, only ES6 with Babel is a total horror to manage (just upgraded a large codebase to Babel 7..).


> Both are transpilers

You don't have to transpile es6 if you don't want older browser support.

> Show me a Typescript codebase not using the type 'any'! In a decent system language you can't get away with that, it's just a fake sense of security.

I don't know if the comparison to system languages is fair, though, because the use-case for JS is quite different from system languages.

Javascript (and by extension, Typescript) is commonly used to interface between the user and the network, both of which often are outside the bounds of the type system. Add to that any code that interfaces with plain JS, such as external libraries or legacy code. When dealing with those, it's natural to use statically untyped values and type ascriptions based on reasonable assumptions.

Taking that into account, I actually think Typescript's type system is fairly well-designed for the use-case. The problem isn't really with Typescript, it's just intrinsic to the use-case of JS.
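As a hedged illustration of that pattern (the `User` shape and `parseUser` helper are invented for the example): network data enters the program as `unknown`, is checked at runtime, and only then ascribed a type.

```typescript
// Hypothetical shape we expect from the network.
interface User { id: number; name: string }

function parseUser(raw: string): User {
  // Network data starts life outside the type system.
  const data: unknown = JSON.parse(raw);
  if (
    typeof data === "object" && data !== null &&
    typeof (data as { id?: unknown }).id === "number" &&
    typeof (data as { name?: unknown }).name === "string"
  ) {
    // The ascription is justified by the runtime checks above.
    return data as User;
  }
  throw new Error("malformed user payload");
}
```

Inside the typed boundary everything downstream of `parseUser` can rely on the `User` shape without further checks.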

Exactly. Even strongly-typed languages have this problem. In C, it looks like `void *`, while in Java, it's `Object`. The `any` keyword is just the latest in a long line of escape hatches. Pretty much every language has one.

> Even strongly-typed languages have this problem. In C

C is clearly statically typed, but among statically typed language it is quite weakly typed.

> Show me a Typescript codebase not using the type 'any'!

Show me a Java project without an unchecked cast... I’m not a CS person but my impression is that sometimes you need these types and they exist in the type system for a reason. No type system is conceptually “perfect”, as in there are soundness/expressivity tradeoffs. I don’t think using ‘any’ is always bad. Sometimes it’s even right?

There are type systems that don't have anything like "any" - you find that in languages like OCaml or Haskell.

But in TS, the main reason for "any" is interop with JS.

Thanks. So in OCaml/Haskell you can write perfectly sound programs?

Haskell's type system still isn't sound. It just eliminates that particular class of errors.

A lack of soundness is closely related to Turing completeness, especially in languages based on HM. In an ideal world you would want a sound type system without Turing completeness, but the tooling doesn't exist to make those choices the most pragmatic right now.

Well, I mean, you can write perfectly sound programs in Java as well, or in TS. It's rather a question of how hard/easy it is to do so. Technically, any language that has either direct memory manipulation or FFI has the potential to do unsound things about types, but it takes a lot more effort.

> Writing a good codebase is IMHO a craft and should not depend on the language or a bunch of tooling.

Of course, relative to other code in that language you can write good code in any language and that is a useful and important skill. If you invest enough energy and craft, you can even write code that compares favorably to good code in any language - the Linux code base is a testament to that. Still, it is a fallacy to assume that all languages are created equal. Nowadays, next to nobody would argue that writing code in Assembler is a good idea. And even though it is now possible to write web apps with C (via webassembly) there is no movement by seasoned C programmers to conquer the web.

> If Typescript is way to go, what about Python, Ruby, abandon it, deprecated? Are those inferior languages compared to Typescript? Typescript is just another hype, very smart play by Microsoft btw.

While the truism "use the right tool for the job" is overrated, it applies here. Untyped, quick to write languages are excellent for small projects. When I'm writing Python as glue code, designing nicely typed interfaces is a distraction. Also, Python, Ruby and Javascript have base types with excellent usability. E.g. when you parse time sheet data, it is pleasant to return [(time.parse('9:00'), 'document project X'), (time.parse('10:15'), 'implement feature Y')]. But when projects grow every structure enforced by the language is a guarantee I cherish. You know, there is this special case in your time sheet parser where you return a (time.parse('23:59'), Null) element, instead of using the empty string, which made the code much more elegant. But months later, when you are refactoring the code calling the parser, you have totally forgotten this behavior and introduce a subtle bug. That's the time where I wish I had a robust type system.
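That time-sheet scenario could be sketched in TypeScript like this (the `Entry` type and `describe` function are hypothetical): the special null case becomes part of the type instead of tribal knowledge, so a later refactor can't forget it.

```typescript
// Hypothetical time-sheet entry: the null special case is now explicit.
type Entry = { time: string; task: string | null };

function describe(e: Entry): string {
  // Under strictNullChecks the compiler forces handling the null branch
  // before task can be used as a string.
  return e.task === null ? `${e.time}: (end of day)` : `${e.time}: ${e.task}`;
}
```

Months later, any caller that forgets the `null` branch fails to compile rather than introducing a subtle bug.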

TypeScript is great in this regard, as you can ignore all types if you wish and gradually add them later, when the complexity of the code base calls for it. Or you start with complete type coverage from the start if that floats your boat. Or you never introduce types and just consume JavaScript. Because of that, TypeScript is especially valuable for libraries: It gives the consumers of the libraries all choices. And it's also the reason the success of TypeScript is celebrated that much: Each new TypeScript enhanced library increases the effectiveness of the tool (and reduces the amount of `any` crutches).

> Show me a Typescript codebase not using the type 'any'!


Admittedly Flow, not TypeScript -- but 99% the same thing syntax-wise. And yes -- it was difficult to set up and get working comprehensively, but now it's there, it's fairly easy to maintain, and it's invaluable when refactoring, accepting PRs, or adding new features.

Most people using `any` should really be using `mixed` (flow) or `unknown` (ts) which at least force you to check before you use some property of those types.
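A minimal sketch of the difference (the `shout` helper is made up): `unknown` refuses property access until you narrow, while `any` would let the same call type-check and fail only at runtime.

```typescript
// `unknown` forces a check before use; `any` would not.
function shout(value: unknown): string {
  // value.toUpperCase();         // compile error: value is `unknown`
  if (typeof value === "string") {
    return value.toUpperCase();   // fine after the typeof narrowing
  }
  return String(value);
}
// Had the parameter been `any`, the commented call above would have
// type-checked and thrown at runtime for non-strings.
```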

We are doing backend and frontend in Typescript and have very limited 'any' usage. Mostly for some hacky library for some small isolated need.

I can the same way cast a value to an Object in Java or NSObject in Swift.

Do you have any experience with static typed languages? All the benefits brought by static typing become obvious & make sense when you actually get to use it.

Also, I'm sorry but you make some weird comparisons. ES6, Coffeescript, TypeScript and Python/Ruby are 4 completely different things. Not sure why you are trying to compare them and choose a winner.

I agree with your sentiment but I find the location of it almost ironic. One thing I hate is when I go into a language and it's full of, "oh don't use the built-in, use this thing over here...".

If we want a clear winner to everyone's benefit, wouldn't we want Yarn to go away and for npm to gain whatever it's missing that makes Yarn relevant?

To be fair, I haven't touched Yarn in years. I switched to it, loved it, and then npm got the package-lock and some performance fixes and I suddenly didn't understand why I'd want to use Yarn.

I'm conflicted because on one hand I strongly believe that we're all better off using the same tools so we can help each other more easily. But that suggests that alternatives shouldn't exist, which stymies innovation.

I get what you are saying, but the two cases are different. Typescript is a whole different language from Flow, so they can't interoperate. Every package that migrates to Typescript deprives the Flow ecosystem of compatibility and mindshare, so a win for one is always a loss for the other. Nobody wants to be a loser, so this will eventually resolve itself into a single dominant choice.

NPM and Yarn don't have this problem, since they are pretty much drop-in replacements for each other. As long as `yarn install` and `npm install` both get the job done equally well, picking one becomes a personal choice. Every project that adopts a standard package.json is a win for both NPM and Yarn. There doesn't need to be a loser in this case, and diversity is good.

That's a really good point. Thanks for sharing. Drop in alternatives can be healthy and not that harmful. It's not like we are locked into two separate camps.

Do people typically prefer TypeScript over ES6?

It's a strange question. Writing in TypeScript is almost identical to writing in ES6. TypeScript is "just" ES6 with interfaces and static types.
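A quick sketch of how small the difference is (the `greet` function is invented for the example): the type annotations are the only change from the ES6 version.

```typescript
// Plain ES6:
//   const greet = (name) => `Hello, ${name}`;
// The same function in TypeScript; the annotations are the only change:
const greet = (name: string): string => `Hello, ${name}`;
```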

Do people prefer typed JavaScript over JavaScript tho? I know that I prefer typed JavaScript, especially to write long-term web applications or Node.js server applications, but I don't think there is yet an unanimous shift towards typed JavaScript.

> Do people prefer typed JavaScript over JavaScript tho?

I'd like dynamic typing but without the implicit and dubious type coercion. Like Python.

Coming from a background of being comfortable with Ruby, Python, and Java, I started with ES6 a few years ago and moved over to Typescript last year. While ES6 makes Javascript much nicer, Typescript makes it way better and easier to maintain, especially with a decent IDE. I already had a lot of respect for Javascript developers previously, but when I started developing node apps with ES6 it grew even more. I can only imagine the discipline needed to maintain large Javascript apps before ES6 and Typescript.

There's not really a competition between ES6 and Typescript because they are generally on the same side. Typescript is mostly just ES6 (ES2015) + Types. (It also supports ES2016 + Types, ES2017 + Types, and ES2018 + Types, and generally as TC39 proposals make it to at least Stage 3 Typescript adds support for them.)

I'm a fan of functional programming over OOP as such, I don't need all of the OOP constructs just give me some immutable primitives, first-class functions and let's get to work.

There's not much I can do with a class that can't be accomplished with a simple function.
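For instance, something usually written as a class can be a closure over local state, and TypeScript infers the types of the returned functions without any OOP constructs (the `makeCounter` example is made up):

```typescript
// A counter built from a closure; no class, and the returned record of
// functions is still fully typed by inference.
function makeCounter(start: number = 0) {
  let n = start;
  return {
    increment: (): number => ++n,
    value: (): number => n,
  };
}
```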

I think I've evolved in my thinking over this in the last several years of experience.

You can continue to code the way you've been enjoying, just with the option to type your inputs and outputs if you choose to. Writing classical OOP isn't imposed on you by TypeScript, nor do you miss out on anything by not writing classical OOP.

Typescript doesn't support typings for functional concepts that libraries like Ramda & Sanctuary use though.
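To illustrate where the pain starts: a curry with a fixed arity types cleanly, but the variadic, auto-currying style of Ramda and Sanctuary is much harder to express (the `curry2` helper is a sketch, not anyone's real API):

```typescript
// A two-argument curry is easy to type with generics...
function curry2<A, B, R>(fn: (a: A, b: B) => R) {
  return (a: A) => (b: B) => fn(a, b);
}

const add = (a: number, b: number) => a + b;
const addTwo = curry2(add)(2);
// ...but a curry that accepts any arity and partial application in any
// position needs large overload sets, which is the Ramda pain point.
```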

That's also true. TS has more to do to improve working with functional JS.

From what I've seen, yes.

I'm a big fan of ES6 and I haven't been a fan of typed languages for a long time but I'm liking using TypeScript. So are my coworkers.

Since we all tend to live in bubbles to some extent, do we have any numbers to back this up (like, # of repos/commits in each language on GitHub, or questions on StackOverflow)?

I tend to view TypeScript similar to CoffeeScript — it brings to JavaScript some features from other languages that are convenient and preferred by a subset of frontend developers. It allows for experimentation in the language, and helps inform TC39 proposals.

In time, I suspect support will coalesce around a specific TC39 process proposal to add type support, it will graduate to stage 3 or 4, get integrated into browsers and Babel, and enthusiasm around TypeScript will wane.

A flexible type system like that of TypeScript is pretty big and complex. It would take considerable time and effort to get that through TC39, I think. It could happen, but seems far off to me.

For data, 46% of npm's survey respondents used TypeScript: https://blog.npmjs.org/post/180868064080/this-year-in-javasc...

I'd prefer Eiffel or Oberon, but almost anything is better than ES6. I just wish there was some kind of standard library movement to coincide with it, but even MS seems okay with left-pad.

> I just wish there was some kind of standard library movement to coincide with it,

I keep hearing this complaint, and at the same time I'm wondering why it's a big problem. So you can't find 3rd party libraries that are good enough to fill in the void from the lack of a good standard library?

Yes, pretty much. I don’t want to bikeshed elementary parts of the developer experience, never mind vetting all the teeny components or find replacements if dependency hell strikes, quality regresses or maintainers disappear.

Given all the constraints, it’s highly unlikely that you find something consistent.

So you write your own standard library substitutes? Math, functional, etc...?

ES5 is not better than ES6. So I prefer ES6 in that case.

> even MS seems okay with left-pad

That's a strange choice of example to highlight the sparseness of the standard library.

I thought it's a good illustration of how fractured and fragile the JS "library" situation is. It's a bit like the previous PHP standard library, combined with the worst of Perl's Tim Toady.

The reason I say it's a strange example is because of this: https://github.com/tc39/proposal-string-pad-start-end, which was added to the language two years ago (and seemingly very quickly [as tc39 proposals go], I assume due to left-pad)

No one around me does. Typescript and JSX are the two ugliest technologies to become fashionable recently. I suspect C# or .NET developers find them familiar, and they've just joined the front end wagon.

I've had the opposite experience. I don't know anyone not writing Typescript.

I don't know anyone using it either (and I work for IT in a Fortune 100 company). But there must be since I keep seeing people mention it.

One huge thing we're giving up when moving from JS to TS is iteration speed though. The typescript compiler is not only an additional step, it's also ridiculously slow compared to other languages' compilers.

Disagree. The TypeScript compiler is quite fast even on large projects. Build systems that are unable to properly parallelize builds are common in the JS world these days so you are probably experiencing tsc much slower than it is.

That said, Babel now supports TypeScript so you should get nearly the same performance with TypeScript as you did with Flow or even just plain ES transpilation. That way you can do typechecking as a separate step like you would have done with Flow.

I haven't really ever seriously used flow or babel transpilation. My experience with compiled languages is mostly gcc which has initialized and started compiling in microseconds, or just writing plain JS which doesn't need transpilation. "Nearly the same performance as Flow/ES in Babel" isn't the benchmark I use.

Apples to oranges. You can use TypeScript without ES* -> ES5/ES3 transpilation. The compilation process may not be as fast as not having one, but it will be similar in speed to running a few sed commands over your source code.

GCC startup time is going to be faster than Node.JS startup time, but that's irrelevant. We don't need to restart the compiler on each iteration. You can just use an auto reloader for that.

And since we're real developers writing real apps, hello world is not important to us. We want to organize our code into modules and bundle those to a single JS file that is minified in production. So we're going to process the source code one way or another.

C++ is a funny example to mention because it scales pretty poorly in the compilation time aspect, because it lacks modules. Compiler startup times are good... But it's not really that fun burning all 16 cores trying to compile Qt and still having it take forever. Compare to Go where compilation is so fast most people don't even talk about compilation speed.

All languages compile eventually anyways, even JS. It just compiles in the browser. If you really want a "zero compilation step" experience, you can just load the TypeScript compiler directly in the browser. It's been done in the past for demos.

> We don't need to restart the compiler on each iteration. You can just use an auto reloader for that.

Interesting, I hadn't considered that. I've always just had a build script (usually in package.json), and run that when I change something (either with an inotifywait loop or manually). It does make sense that if you're instead keeping one long-running node.js process instead of spawning a new node.js instance every time, tsc's slow start-up won't matter as much. I'll have to keep that in mind next time I end up doing typescript work.

I have used Angular, using a long-running webpack process which does angular template and typescript compilation, and I found that to be excruciatingly slow even for small changes, but I'm willing to bet that has more to do with Angular than with Typescript.

> C++ is a funny example to mention because it scales pretty poorly in the compilation time aspect

C++'s compile times are horrible, I won't try to defend it - waiting another eight hours for Chromium to compile because you need a build with debug symbols is just horrible. However, my experience actually working on relatively small C++ code bases is pretty good; compiling each individual file doesn't take very long, you only recompile the files which have actually changed, and recompiling one C++ file and re-linking the project takes around 0.4 seconds (unless you're doing something stupid like linking in all of webrtc, which we admittedly do for a couple of projects at work). Compiling a typescript file which just contains `console.log("Hello World");` on the other hand takes a bit over a second. (I know compiling hello world isn't very relevant when your compiler is a long-running process instead of a one-shot thing; I'm just including it because that's what my experience with tsc has been until now.)

Do you happen to know of any good resources for how you would run the compiler multiple times from one node.js process? I imagine webpack maybe does it already, but I would be interested to import the compiler as a library or something, and do some testing to see how big of a difference it actually makes to not start a new javascript VM every time.

Webpack indeed can run TS in a loop, using hot reloading. If you want to do this yourself it's fairly trivial, the TS compiler API is at least documented somewhat:


And of course you can read the source code for Webpack's typescript support.

Sidenote: although it's less popular, I highly recommend looking into Parcel Bundler, it's much nicer to use and has no configuration required. You can, for example, point it at an HTML file with a script tag pointing to a TypeScript entrypoint that includes NPM modules and it will handle compiling, bundling, minifying transparently. And it's relatively quick.

Classic fallacy: speed lost with dynamic code ("I can't hit save and refresh the page") vs compiled ("I can't make changes without breaking 5 things").

Nah, my day job is writing C++ and most of my side projects are C. I have nothing against compiled languages, and if the compiler is fast enough, just having `make && ./whatever` in my shell history is almost indistinguishable from `node index.js`. It's just that it takes _over a second_ for tsc to compile hello world, and all javascript tooling feels similarly sluggish to me. I have similar issues with java; make starts compiling stuff in microseconds, while gradle doesn't spawn a javac process in what feels like forever.

I probably shouldn't have led with the "not only is it an extra step" thing.

Maybe it's a benefit when moving from Flow, but personally, I would rather have JS projects spend that time adding more tests than converting to static typing.

Edit: changed strong to static.

The two are completely unrelated. When I was writing vanilla JS it felt like half my tests were just focused on ensuring that everything remained the type it was supposed to be. Not in a static type sense but in a duck type sense at least - does this object still have field X? Can I still call function Y on this object? Is this variable still defined?

All those kinds of tests can be eliminated with static typing. It's true that a lot of those types of errors would get picked up in functional tests, but at least from my perspective it's not great practice to rely on a functional test to catch that kind of change. If your functional test changes it's easy to lose coverage of your basic unit tests that were only happening implicitly.
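As a sketch of what that looks like in practice (the `Order` interface and `refund` function are invented), the implicit "does this object still have field X?" tests become declarations the compiler checks on every code path:

```typescript
// The shape assertions move from test code into a declaration.
interface Order {
  id: string;
  total: number;
  cancel(): void;
}

function refund(order: Order): number {
  order.cancel();     // fails to compile if cancel() is ever removed
  return order.total; // fails to compile if total changes type
}
```

The unit tests that remain can focus on whether refunding actually does the right thing, not whether the object still quacks.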

Funny how you say the two are completely unrelated and then go on to show how they are related with your personal experience.

While you're technically correct, this misses the value of static type checking vs type checking in tests. Tests must be written manually, must check every code path and must be updated whenever the underlying implementation changes, in order to achieve the results of static type checking. But types are declarative and will be checked automatically for every code path called.

1. If you don't have tests covering code, sure, type checks are better than nothing. But if you do, why do you also need type checks?

2. If the tests must be updated whenever the underlying implementation changes, it might be testing too much; it's better to test behaviour, not implementation.

I was addressing the above comments about tests that specifically check types. My position is that declarative types and a type checker are better because each code path is checked automatically without writing additional tests, and because the type checker automatically adapts without changing or writing new tests when the implementation changes. Testing behavior is another thing entirely.

I'm not missing the value we just have a different idea of what that value is, but anyway, this is a different, more specific topic than the one I originally brought up and one that I don't feel like engaging in.

I think it exactly addresses your original topic, but you're in no way obligated to engage it further. Have a nice day!

Static types are a form of test!

They are a compile time contract at best, they are nothing like a test.

Test and compile time contracts both check invariants.

Abstractly speaking, there's no big conceptual chasm that separates a test in a test suite from a test the compiler makes.

A lot of tests end up testing things which would've been caught by the type system. Static and strong typing doesn't make tests in general unnecessary, but it does mean you don't need quite so many trivial tests.

Half the tests I write for Javascript code are type tests. Typescript does that much better.

They are checking the shape of inputs and outputs, making it unnecessary to test that a function returns a value of a particular type or that this value has particular fields. They make sure you don't make stupid mistakes, leaving it up to you to test really valuable bits of logic.

Static types most definitely do not replace the need for tests!

Agreed! They replace the need for specific kinds of unit tests, e.g. making sure your functions accept and return the correct shapes. You still need to test the actual logic, and should be doing some sort of integration test on the application as a whole.

In retrospect, my original response was too terse. Static typing and testing serve the same purpose, which is to make sure your application runs according to some measure of correctness. We shouldn't be setting up one-or-the-other dichotomies — we can have both!

But it misses the point I was originally making. I was not suggesting they are mutually exclusive.

My interpretation of your point was that unit tests are more valuable than static types, and I don't think that's necessarily true. They're both forms of testing, and it's important to recognize their strengths and weaknesses.

Static types won't help with your application logic, but they will (for example) ensure your function inputs and outputs are the correct types, help document your code and ensure consumers call your code correctly. Unit tests can only do the first one, and it's more verbose and brittle.

Your statement and the one you're responding to are wholly compatible.

They aren’t, and it’s detrimental to think of them that way.

Why do you think so? I think they're better than equivalent tests, and in fact help bring about better testing.

A lot of unit tests will revolve around checking types. If you don't have to runtime-check this, you can focus unit tests on semantics and integrations assuming the underlying data structures are correct.

Just a nitpick, but Typescript adds static typing, not strong typing. The types are still weak because implicit type conversions still exist:

  const x = "one two" + 3;

I think you can make either your typescript or tslint config error out on this at compile time; it's configurable.
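For reference, by default the coercion does type-check, since `+` between string and number is legal TypeScript; a strictly typed helper (the `concatStrings` function here is a made-up illustration) pushes the error to compile time, and I believe lint rules along the lines of `restrict-plus-operands` can flag the raw operator too.

```typescript
// Legal TypeScript: string + number concatenates.
const x = "one two" + 3; // "one two3"

// A strictly typed helper rejects the mixed case at compile time:
function concatStrings(a: string, b: string): string {
  return a + b;
}
// concatStrings("one two", 3); // error: number is not assignable to string
```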

sounds like their primary goal is 'attract more contributions' rather than 'strengthen the type system'.

I've got to say I'm not a big fan of some of these changes and I think they're biting off more than they can chew.

> Writing posix command lines inside your scripts field will work regardless of the underlying operating system. This is because Berry will ship with a portable posix-like light shell that'll be used by default.

> Scripts will be able to put their arguments anywhere in the command-line (and repeat them if needed) using $@. Similarly, scripts will have access to $1, $2, etc.

If you use either of these features your package.json will no longer work with NPM. Maybe they should call it yarn-package.json?

> Starting from Berry, we made it an explicit goal that each component of our pipeline can be switched to adapt to different install targets. In a way, Yarn will now be a package manager platform as much as a package manager. If you're interested into implementing PHP, Python, Ruby package installers without ever leaving Yarn, please open an issue and we'll help you get started!

Noooo, god no. Package management is a gargantuan, complicated task, and these languages all have their own solutions already.

That being said, it's cool that they're rewriting it in TypeScript.

> If you use either of these features your package.json will no longer work with NPM.

Yes and no. In the case of the `postinstall` script (which might indeed have to run on npm setups) you might want to refrain from using those features. In any other case you simply won't use npm if you use them, because those are local scripts that only you and your team will use - and regardless of those features you should all use the same package manager anyway.

> Noooo, god no. Package management is a gargantuan, complicated task, and these languages all have their own solutions already.

We won't spend much time on it ourselves - as you mentioned, other solutions exist and we have to pick our fights. Still, I believe this is a necessary move if we want to make our codebase clear and easy to contribute to. It's not so much about Yarn supporting everything as it is about making sure that we don't end up with a monolithic system that's hard to maintain.

I don't see the need for PHP, because composer is one of the few things I like about it. And I don't know enough about Ruby to have an opinion there.

But something for Python that actually works? Yes, please!

For python we've been using poetry[1] at work. Dep resolution is a bit slow but otherwise it works well enough, definitely a saner choice than pulling in a completely different stack just for one tool.

[1]: https://poetry.eustace.io/

There appears to be a real movement to move from Flow to typescript. Is Flow dying?

I tried Flow and TypeScript out about half a year ago in a client project and Flow was kind of painful and difficult to use, the updates, bug fixes and new features weren't coming as often as TypeScript's were, and maybe 70% of people I talked to were using TypeScript instead of Flow.

Since then it's more like 90% now, and the TS team seems to be adding and announcing features faster than before, whereas I haven't heard about any new features in Flow. Maybe they're there, I'm just saying I haven't heard about them. This is all anecdotal but from what I've heard, it's very common. This is a big reason I chose TypeScript in Autumn.

Also, VS Code has become the de facto free editor of JavaScript projects with a full set of professional-grade IDE features across the board, and VS Code has first-class support for TypeScript but only plugin-level support for Flow. Since Monaco is simply extracted from VS Code, that means I got all VS Code's IDE features for free when I embedded Monaco into Autumn, all I had to do was set the language to "typescript" and it just works. All these free and serious benefits of TypeScript's ecosystem kind of point to this conclusion that I've seen on HN and elsewhere over and over, that TypeScript has already won and Flow is on its way out.

Facebook (excuse me, bunch of FB employees from the Flow team) says no: https://github.com/facebook/flow/issues/7365#issuecomment-45.... Of course, that's exactly what I'd say if I was in charge of a dying project.

Edit: but, their main codebase is in Flow, and that sounds like a mess to migrate, so I wouldn't be _too_ worried. It might slow down, but I doubt it will become unmaintained anytime before Facebook gets sued out of business :-P

I recalled when Microsoft employees and MVPs denied when Silverlight was dying. :-( I think the TypeScript momentum is too strong now.

I think Flow's in more direct trouble, but I wouldn't announce victory for TypeScript just yet. With wasm around the corner, things may (or may not, shrug) change significantly.

Even with wasm, much as I'm looking forward to the Great JS Purge personally, it's not going to happen anytime soon - too much existing code and devices. JS will remain necessary on the front-end for at least another decade, and probably beyond that. So TS will still be necessary to make the pill less bitter.

Both yarn and jest have announced they're switching to TypeScript. A corporation like Facebook would never allow that unless there was an internal shift in direction.

I'd consider flow dead.

> A corporate like Facebook would never allow that unless there was an internal shift in direction.

Sure they would. Flow has nothing to do with Facebook's business strategy, or its marketing campaigns, or even its corporate strategy. I'd be surprised if anyone on VP or higher even knows what it is.

Insofar as "NIH" wins at big companies, it's when there's a business motive to promote a certain technology or FUD about what other technologies might exist.

But engineering team A deciding not to use engineering team B's tools despite working at the same company? Happens all the time.

Engineering VPs very much know about it.

Well, TypeScript at least is growing enormously. Whether Flow dies or not is up to its authors, but I think TypeScript is pretty much unstoppable today.

The writing has been on the wall for a very [1] very [2] long [3] time. Facebook and some consumers of their ecosystem have been holding the fort, but it was inevitable something like this would happen.

1: https://npm-stat.com/charts.html?package=babel-core&package=...

2: https://trends.google.com/trends/explore?date=today%205-y&q=...

3: https://developers.slashdot.org/story/18/11/25/017227/micros...

I maintain PayPal's cross domain suite of libraries [0] and I'm fully intending to drop Flow for TS. The single thing I'm waiting for is https://github.com/Microsoft/TypeScript/issues/21699 since we use a fair amount of custom jsx rendering [1]. That's the one main thing (for me) that Flow is way stronger at right now.

[0]: https://medium.com/@bluepnume/introducing-paypals-open-sourc...

[1]: https://medium.com/@bluepnume/jsx-is-a-stellar-invention-eve...

There have been several projects that were flow based that have moved to TS. I think the real litmus test will be if React eventually either includes TS types or is rewritten in TS.

Disclaimer: I work with TypeScript professionally.

I'm pretty in tune with what's going on around React, and I don't see that one ever happening.

The React team is _very_ busy already with work around Hooks, Concurrent Mode, and Suspense. There's no way they're going to pause development on implementing all these major chunks of functionality just to rewrite from one type system to another.

I've seen Dan express some frustration with Flow's pace of development on Twitter a couple times, but beyond that, no indications whatsoever that React would be converted to TS. In the entirely hypothetical scenario that React _did_ get rewritten to another language, I have to assume it would be something like ReasonML (which was created by Jordan Walke, the original creator of React).

One of the benefits to TypeScript is that the codebase wouldn't need to be rewritten.

Declaration files could be included. There, they could declare types for all classes, methods, constants, etc., similar to C header files.

(This is how the DefinitelyTyped repository handles typings for untyped source repositories: https://github.com/DefinitelyTyped/DefinitelyTyped)


JavaScript: app.js

    function app(arg1, arg2, arg3) {
       // does something
       return {
          key1: someStringValue,
          key2: someNumberValue,
       };
    }

TypeScript: app.d.ts

    declare type AppReturnValue = { key1: string, key2: number };
    declare function app(arg1: string, arg2: string[], arg3: boolean): AppReturnValue;

Given that React is already heavily invested in Flow types, there would be _some_ form of rewrite.

And like I said, Flow is providing sufficient benefit for the React team right now, and their focus is on expanding React's capabilities. Changing type systems is not on their radar as far as I know.

Oh— can you point me at a file in particular? I had a look around the codebase and didn't see anything but plain JS. Might have been looking in the wrong places.

And I certainly get your point. However, I wonder if they'd consider adding community-contributed TypeScript declaration files to the official repo, as they wouldn't cause conflicts with the Flow system.

Flow types are _everywhere_ in the codebase.

To pick a specific example, here's the file that implements the core logic for the new Hooks feature:


As for the TS typings, there's been lots of agitation from people asking them to be officially included and shipped with React. But, again, the React devs themselves aren't TS users (that I know of), and so they don't have the expertise to write and maintain those typings. Better that they be left over in DefinitelyTyped for the community to maintain.

(I'm a Redux maintainer, and I feel exactly the same way about the typings for React-Redux. I don't have any actual TS experience myself yet, and I couldn't do anything useful in regards to the React-Redux typings. Plus, I've got far too much else on my plate to worry about those.)

I'm curious as to this trend as well. I've read the justifications for it in each case and in this particular one I don't get it.

Isn't Flow more concerned with soundness than intellisense? Has TypeScript caught up with Flow in this regard?

Perhaps it's because I prefer to do my work in strongly typed, pure FP languages where I can but I work professionally with JS and have been investing in Flow there for over a year now. While Flow is still not exactly great it can at least mimic exhaustive pattern matching and catches most unsafe type errors without getting in the way of common JS patterns too much.

The reason the yarn maintainers are giving is because they want more contributors? What advantage will that have if TypeScript isn't catching the errors you used to be able to catch or don't know about yet?

Curious to know if it's worth migrating over to TS without sacrificing anything other than the minor inconvenience.

> Has TypeScript caught up with flow in this regard

I think the answer is: it's very close. It's still a little behind flow on soundness, but it's now close enough (if you enable strict mode, which you should!) that you're unlikely to notice the difference.

The TS type system is very impressive. It can't do everything that the functional languages can do, but it can do some things that they can't, and importantly it's still improving rapidly.
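As an example of what's meant here, a sketch of exhaustiveness checking in TypeScript using a discriminated union and the `never` type (the type and function names are made up for illustration):

```typescript
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "square":
      return s.side ** 2;
    default: {
      // If a new Shape variant is added without a matching case, this
      // assignment stops typechecking, forcing the switch to be updated.
      const exhaustive: never = s;
      return exhaustive;
    }
  }
}

console.log(area({ kind: "square", side: 3 })); // → 9
```

This mimics the exhaustive pattern matching that Flow users often point to, entirely within TS's strict mode.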

You may be interested in the roadmap/changelog page: https://github.com/Microsoft/TypeScript/wiki/Roadmap

Does anyone know of any "third choice" around typing JavaScript? I would love to add types to my code, but I want to write "real" JavaScript: so the code I input is the code that is executed by the browser. I just want the compile step to strip away the type annotations.

There was initially talk of Flow using comments to actually work without touching the source code at all, but I don't think anything came of that... is there anything else on the horizon?

[Edit: Nevermind: I went looking for the github issue about adding types as comments, and it turns out it's already supported by flow: https://flow.org/en/docs/types/comments/ - is there anything like this for TypeScript?]

Yes, TypeScript can do type checking on regular Javascript (annotated with comments) as well: https://www.typescriptlang.org/docs/handbook/type-checking-j...
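A minimal sketch of what that looks like in practice (the function is invented for the example): with `// @ts-check` at the top, editors and `tsc` will check the JSDoc annotations while the file stays plain JavaScript:

```javascript
// @ts-check

/**
 * @param {number} a
 * @param {number} b
 * @returns {number}
 */
function add(a, b) {
  return a + b;
}

console.log(add(1, 2)); // → 3
// add("1", 2); // flagged by the checker, yet the file is plain runnable JS
```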

> I would love to add types to my code, but I want to write "real" JavaScript: so the code I input is the code that is executed by the browser. I just want the compile step to strip away the type annotations.

That's what Typescript is. Type annotations don't change your code; they're stripped away by Babel at compile time. In fact, by default, tsc will compile TS code that fails typechecking into valid JS code (which will likely error when you use it, at least in some circumstances, but can run just fine).

Typescript is a clear superset of JS, so there are no actual logical or even syntactical changes done by the compiler. However, you may be confused by the often-used "compilation target" features of Babel (and tsc): you may write ES2018 code, target ES5, and have your ES2018 syntactic sugar turned into ES5-compatible code. This is entirely opt-in, and not related to Typescript (other than the fact that Typescript is always compatible with the latest ES spec, so you can use any legal ES2018 syntax in it).

Hope that clears it up…

This is where I see a big difference between Flow / TS.

With Flow, the type annotations are just stripped away, none of your Flow code affects runtime code.

This is not so with TS since you have things like Enums, which will be compiled into objects and are part of your runtime code.
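For example (a rough sketch; the emitted JS shown in the comments is approximately what tsc produces for a numeric enum):

```typescript
enum Color {
  Red,   // 0
  Green, // 1
}

// tsc emits roughly:
//   var Color;
//   (function (Color) {
//     Color[Color["Red"] = 0] = "Red";
//     Color[Color["Green"] = 1] = "Green";
//   })(Color || (Color = {}));
// i.e. a real object that exists at runtime, unlike plain type annotations.

console.log(Color.Red); // → 0
console.log(Color[0]);  // → "Red" (reverse mapping)
```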

If you write type annotations in TS, they're stripped away.

If you write type annotations in Flow, they're stripped away.

There's no difference. TS has some additional features which are purely optional that don't get purely stripped out, but you're not mandated to use them.

There is no difference if you limit the use of the tool to not use these features.

In my eyes, it's a philosophical difference between the two, Flow can be more easily integrated into an existing codebase by just adding //@flow at the top the file and has no features which can affect runtime code. Whereas TypeScript tries to be a different language altogether that uses a new file extension, adds new features, and has its own compiler.

When you can implement a tool like this using TS, let me know https://github.com/flowtype/flow-remove-types

Typescript doesn't try to be a different language. It tries to be an exact superset of JS: JS with annotations. And you can absolutely implement what you linked, it exists and it's called tsc: the "compiler" you're somehow upset Typescript has (this + checking types is all it does!). Babel does it natively.

Enums are a fair point, those aren't in the ES specs (though I suspect at some point they will be). However, they're an incredible addition and they really are just syntactic sugar for a more complex type of object.

BTW, typescript supports jsdoc-style annotations, and --allowjs even lets it typecheck javascript code. It also supports more, because nobody actually only wants those things; they're not that great on their own.

Not sure where you got the idea that I'm upset that TS has a compiler, I'm just pointing out my perspective on the differences between the two tools. I use both on a daily basis.

You might not think the differences is a big deal, but affecting runtime code is a pretty major line to cross. Not that there is anything inherently wrong with that, but at that point it becomes a different tool, in my opinion.

No I get you, being able to map the exact code is important. But I think what's happening is you're confused about the ES20XX translation layers. Those aren't at the Typescript level, they're at the Babel level; it just so happens that tsc supports that bit, but it's my understanding that this will be going away at some point (e.g. Typescript will move to only doing the typechecking, and leave Babel to only do the compiling).

enums are a tiny, extremely useful and extremely optional part of the language and they don't warrant this label of "philosophical difference", IMO.

I think Babel does exactly that if you only enable the TypeScript preset.

Is there any compiler option or linter config that prevents the use of these extra features?

tslint has no-namespace and no-enum to disallow the two things that create new runtime code

TypeScript works with JSDoc annotations: https://github.com/Microsoft/TypeScript/wiki/JsDoc-support-i...

Can confirm, have sometimes added `/** @type {Foo} */` above a variable and VS Code assumed its type was Foo everywhere else in my project, as if I'd typed `foo: Foo` directly. Didn't even have TypeScript in the project itself; purely an IDE feature. Extremely well thought out and useful ecosystem.

If you're using Babel for compatibility with particular browsers then that can already handle TypeScript. What distinction are you drawing between a compile step that "strips away the type annotations" and one that does something else - what else is it that you consider compiling TypeScript to include?

I have to ask what you're trying to achieve here. Moving types into comments just gives people who build without your typechecker the chance to break your code.

The comment stripping is optional (just for saving bytes on download). The goal is moving away from Babel, and any other unnecessary transpiling steps. Now that JavaScript has (a semi-working) module system, I find my projects having fewer and fewer dot files and far fewer dependencies.

If I can have a folder of plain JS that I know will work in a browser 20 years from now without having to resurrect an ancient/abandoned toolchain, then I'll do that!

Are you gonna hardcode all the HTML and CSS as well? I would expect resurrecting a 100% compatible toolchain for a mainstream source format 20 years from now will be easier than resurrecting a 100% compatible browser. Especially if that format is a stricter/less ambiguous one like typescript.

Not exactly what you're asking, but a "third choice" in general to typed ECMAscript has been ActionScript. A quick google found a transpiler project, but it looks to be abandoned: https://github.com/Cleod9/as3js

Still, I once did Flex development (would compile to a SWF file, or with Adobe Air to a native executable). It was a pleasure -- so long as you used the Flex/Flash Builder IDE (which was itself built on top of Eclipse). Trying to develop in vim was harder, though mainly because of the MXML part, but I still liked MXML better than HTML. Later at the same job I did Node and found that pretty enjoyable too, plus I could use vim all the time.

Tooling is always a concern with a language. I think dynamically typed languages are partially successful because they let you get away with so much less tooling. Though not all dynamic languages are equal in the tooling they can trivially support, e.g. Common Lisp with Slime enables all the usual stuff (who calls x, who sets y, who specializes method z...) and you can use Slime from a variety of other tools.

Flow really shines when you have a large, untyped codebase and want to incrementally add types to it. If you're starting a new project (or rewriting one) TypeScript is the more sensible choice. By nature Flow is going to be used less and less over time.

For people now thinking "well, we have a large untyped codebase... damn":

You can turn TypeScript's type checking on for a regular JS file.
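A minimal tsconfig sketch for that setup (`allowJs` and `checkJs` are the actual TypeScript compiler options; `noEmit` keeps tsc acting as a pure checker with no output files):

```json
{
  "compilerOptions": {
    "allowJs": true,
    "checkJs": true,
    "noEmit": true
  }
}
```

With this in place, `tsc` will type-check existing `.js` files using inference and any JSDoc annotations, without requiring a rename to `.ts`.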


> your previous yarn.lock will be silently migrated

I hope that's not that silent, because at that point, everybody who works on that project will have to upgrade as well.

That said, shipping the light-weight POSIX-like shell will make it a lot easier for scripts to be multiplatform. That's the improvement I'm looking forward to most.

We'll make sure to add a notice at runtime to make it clear (plus, a substantial diff at review time).

Also note that we recommend using `yarn policies set-version` to enforce the version of Yarn used by everyone in your team with very little friction:


There's a feature I didn't know about - that's enormously useful, thanks! Now to remember applying that to all my different repos...

For the uninitiated / confused, this refers to yarnpkg, the JavaScript dependency manager, not (Hadoop) YARN, the cluster manager.

I'm curious as to why yarn instead of contributing to NPM? I am aware that yarn was the inspiration for many improvements for NPM by providing an alternative, but going forward do we need two systems? Is the plan for yarn to be compatible with NPM and package.json?

From what I've heard, the Yarn codebase is much cleaner than NPM. If anything, I think it would make more sense to migrate NPM over to yarn

Yes, having competition here has clearly benefited the ecosystem.

I wrote a recent comment comparing the history and goals of Yarn and NPM:


I've been using both for a while and I'm happy with their decision not to try and be "compatible" with NPM. I'm also pretty happy with using Yarn only - things are just faster and in reality make a little bit more sense.

The addition of vulnerability scanning was the only reason our company switched back to npm from yarn. Other than that, yarn offers a great experience

Yarn has this too (although it uses the NPM audit database): `yarn audit`.

Oh, I didn't know that! Here's some resources about it if you haven't heard of it either:

documentation; https://yarnpkg.com/lang/en/docs/cli/audit/

original feature issue: https://github.com/yarnpkg/yarn/issues/5808

release comment in that issue: https://github.com/yarnpkg/yarn/issues/5808#issuecomment-441...

> Writing posix command lines inside your scripts field will work regardless of the underlying operating system

That is very nice. No need to install other dependencies just to do 'rm -rf'
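Concretely, the win is for syntax: a scripts entry like this (hypothetical script names) uses env-var assignment and `&&`, which break under cmd.exe today but would behave the same everywhere under the portable shell:

```json
{
  "scripts": {
    "build": "NODE_ENV=production node scripts/build.js && echo done"
  }
}
```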

Note that this is mostly about the command line syntax, not so much the commands themselves, which will be executed just like now.

That being said, maybe we'll offer some builtin as well (possibly in a similar way to what CMake offers?[1]). That would be worth an RFC later on :)

[1] https://cmake.org/cmake/help/v3.2/manual/cmake.1.html#comman...

Since it seems the devs are here answering questions:

Which lightweight shell will be used on Windows? Does it also bundle standard unix tools (if a script pipes to grep or less for example)?

How will paths be translated on Windows? I’ve attempted something similar recently and had to do a fair amount of regex magic + using cygwins built in path translation utility to preprocess commands. Curious to see if there’s a better way to solve that.

> Which lightweight shell will be used on Windows? Does it also bundle standard unix tools (if a script pipes to grep or less for example)?

It will be in-house, and very basic. We don't intend to rewrite bash, just to provide the basic experience that is usually needed when adding script into the `scripts` field. For more complex needs we'll simply offer a way to opt-out and use the native shell, or to call Node scripts.

> How will paths be translated on Windows?

The current Yarn tries to do this by using the `path` native module. It's quite error-prone since backslashes tend to appear in the worst possible places. For the v2 I plan to work with all paths in a posix style, and convert them into Windows paths right before they reach the filesystem (which is similar to what Cygwin does, as you mentioned). It would be a bit slower on Windows, but massively simpler in the codebase.

I'm assuming that lifecycle scripts (and scripts called by lifecycle scripts) in particular will still need to use common Windows-supported syntax? Even if the devs of the package are guaranteed to be using Yarn, people installing the package might still be using npm. So I assume some caveats apply to some scripts, right?

p.s. I love Yarn :)

The `postinstall` scripts would likely be better off without using those features, indeed. But in the end, your packages would be better off without `postinstall` scripts anyway ;)

Yeah I don't recommend `postinstall`, but `prepare` (and all the potential build-type scripts it could run) is actually useful (esp. for allowing people to install from unpublished versions via git), and would need special consideration. :(

If you use the yarn cli and have tried the npm cli recently, why do you still use yarn? Are there big gaps that you find that NPM has failed to close?

Unfortunately npm is probably the biggest source of my daily development frustrations, even on the latest version.

I still come across bugs (that are definitely in npm itself) that have been around absolutely forever, like:

> npm ERR! cb() never called!

It sometimes gets its primary purpose, dependency resolution, wrong. I'll give it a perfectly reasonable package.json to install, which it will do, and then `npm ls` will still error with "missing dependency!" in some package. This should not be possible.

Related, it will put packages from the flattened tree in the wrong place. I can have a dependency (that other dependencies need, specified in their peerDependencies) specified in my top-level package.json and bafflingly, npm will still move it from top-level node_modules into the node_modules of something else that happens to use it, breaking the peerDependencies I was trying to satisfy.

On the install process: even if the total time to install is roughly on par with Yarn, I find that Yarn is much smoother. Whatever they're doing, they're yielding the CPU a lot more, and the result is I can actually work while it installs. npm meanwhile doesn't yield much during install and slows the whole system to a crawl.

Some npm commands are extremely neglected. The "success" message printed by one of the user/permissions related commands is simply: {}

Lastly, I will leave this terrifying comment here: https://github.com/npm/npm/issues/16528#issuecomment-3075400... – note this comment was left 2–3 years after receiving $10M in funding.

I don't blame them for being one-upped by Yarn at every turn, but they still fail to get the basics right, let alone innovate on things.

Kind of ridiculous, but yarn's CLI just feels better. It's a bit simpler, `install` doesn't really seem like an "add to package manifest" command word, there's the `global` subcommand for managing global command-line tools vs `-g`/`--global`, and no command aliases, which I like more for some reason. For me it's mostly personal preference; ergonomics would be the key difference for me. Plus, the upcoming stuff mentioned in this issue has got me excited!

Edit: also, in my experience using npm for some little stuff recently, yarn is still faster installing packages.

Edit2: I also like the ability to run scripts/commands from the base level of command, e.g. `yarn start`, `yarn webpack --mode production`, `yarn build:web`

If package.json is already in JSON format, why not use the same for yarn.lock? Honest question, there must be a good reason.

We want the lockfile to be easy to review by humans. In our experience, JSON doesn't quite fit the bill once you reach a critical mass of data.

Our lockfile format worked fine for the past three years (bar the unfortunate YAML incompatibilities that we're about to fix). Don't fix what isn't broken :)

Too bad it's not package.yaml or package.js so that we can include comments and document our dependencies....

serious question, if you are starting a new project today why would you choose to use npm over yarn?

Experience at the agency I worked at until a couple months ago, which had continually followed this pattern with projects started after the yarn/npm split got serious:

1) Start or inherit a project using Yarn because that's what all the cool kids are using.

2) Develop for a while, everything's fine.

3) Lose a whole day when you eventually stumble on a bug or missing feature in Yarn.

4) Angrily switch the project to NPM, solving the problem immediately.

5) Continue developing as usual.

I think this happened on like half a dozen projects after the yarn/npm split occurred, and I saw it happen as recently as Fall 2018. Developers there leaned toward trend-chasing, but after being burned several times even the trend-chasier ones were tepid on yarn and tended to accept that if they started with it they'd probably end up switching before long.

FWIW I don't like npm much, but I still default to it so I don't, inevitably, hit the above situation at some point. Yarn doesn't provide enough benefits to make it worth having an even less-trustworthy tool in my build process than npm already is.

[EDIT] TL;DR: We all got sick of ending up on open GH issues for Yarn when trying to track down build and, worse still, run-time problems. So we started favoring NPM again as the lower (though far from zero) headache option.

Because it comes standard with most node distributions and is pretty good.

It coming standard means one fewer dependency for everyone on the team to install, which is important on my current team where roughly half are backend- or mobile-only.

I like both yarn and npm, and yarn would have some benefits for us, but probably not enough to counter the extra effort of onboarding other devs. npm still has some pain points, but I've run into a few pain points on yarn too. And I think there being multiple projects here has helped the ecosystem.

npm is also doing some neat feature development (npm audit).

Also, the npm team is fantastic, whereas when I adopted yarn early on they dismissed the issue I opened about yarn not working for my setup. They're a good team, but that lowered my confidence that I'd be able to get through any obstacles while using it. It's still a fantastic tool though.

Because npm is a standard (default) package management tool that comes with node.

After getting `package-lock.json` and `npm ci`, I would rather wonder why choose yarn instead of npm.

"npm ci" deletes node_modules directory every time. Try using that when you depend on some native modules. :(

I thought `npm ci` is more suited for ci machines, where it's crucial that packages are pinned to specific versions. For local development, I just npm install. Perhaps with no-save option in order to avoid updating package-lock. Works fine.

Because I don’t trust it. Had to do rm -rf node_modules too many times with npm.

Yarn has over 1500 open bugs. Rather than working on changes, it'd be nice to stop and address these.

Most of those were fixed a long time ago, but we simply don't have the resources to triage them.

This effort we're starting is in no small part about decreasing the number of issues that will be created, by empowering the users to unblock themselves* and by solidifying Yarn's codebase.

* You wouldn't believe the number of issues that are simply about things working as they should - we can't really blame their authors, because it can be quite hard to find the right paragraph in the documentation, but it's extremely taxing on a small team. Similarly, we often have issues created against older releases, or without a reproducible test case.

I noticed the issue count while using yarn for the first time yesterday. It encountered a fatal error (Disk Quota Exceeded) and then proceeded blithely on. It's only one data point, but doesn't inspire confidence.

And npm had 2166 at the time they archived their old GitHub repo, moved over to npm-cli (with issues disabled), and have all their issues in a Discourse forum, so you can't get an overall count, nor any connection to PRs and commits.

So I'm not sure you can glean too much from just looking at an issue count in isolation.
