Google Feedback on TypeScript 3.5 (github.com)
396 points by dkns 14 days ago | 145 comments



> (I might suggest the underlying problem in this code is relying on inference too much, but the threshold for "too much" is difficult to communicate to users.)

This is an outstandingly interesting line from the whole writeup.

I like the writeup in its entirety for being very balanced and thoughtful, but this line in particular really stands out to me as worth more thought for anyone interested in language and type system design.

Inference is great.

Except... when it's not. When it's "too much". When it starts making breaking changes appear "too distantly".

It's an interesting topic to reflect upon, because the inference isn't making a breakage; just shifting around where the breakage appears. (And this makes it hard to ever direct criticism at an inference mechanism!) But this kind of shifting-around of where the breakage appears can have a drastic impact on the ergonomics of handling it: how early it's detected, how close to the actual change site the tooling will be able to point the coder, etc. That's important. And almost everything about it involves the word "too" -- which means the area is incredibly subjective, and requires some kind of norm-building... while not necessarily providing any clear place for that norm-building to center around.

I don't have a point here other than to say this is interesting to reflect on. I suspect the last chapter on type inference systems has not yet been written. Can an inference system be designed such that it naturally restrains "too" much use of it?


Thanks for highlighting this bit, I also find it very interesting to think about!

Here's a related property I've also discovered of 'advanced' type systems.

In my understanding of how unification happens in systems like H-M, you first give every expression its own type variable, then perform a search for assignments to those variables that keep the program valid, leaving the remainder generic.

But as your type system gets fancier, and particularly in the presence of union types and subtyping, this search process can feel like "make whatever type judgements are necessary to make this program still compile". E.g. in code like

    let x = new Set();
    x.add('hello');
    x.add(3);  // oops, meant to add the string '3'
    f(x);
The inferencer can just reason "oh apparently x is Set<number|string>", and "oh apparently f() operates over all kinds of sets, Set<{}>". And especially when there's never a more specific type for things to "bottom out" on (like say f just forwards the set elements to JSON.stringify(), which accepts both string and number already), nothing will ever reveal to you that you actually wrote a bug.
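For contrast, here is a minimal TypeScript sketch (assuming a reasonably recent compiler; the exact error wording may differ) of how pinning the element type up front moves the failure to the offending line instead of letting inference widen the type:

    const x = new Set<string>();  // explicit element type instead of letting inference widen it
    x.add('hello');
    x.add(3);                     // error: argument of type 'number' is not assignable to parameter of type 'string'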

But then meanwhile even in Haskell you run into cases where the inferencer wasn't generic enough, like https://wiki.haskell.org/Monomorphism_restriction , so it's not even clear cut that you want the inferencer to be smarter or dumber. As my coworker says: as a programmer you have to be able to basically run the inferencer in your head, and that becomes very hard when the inferencer gets very smart. (See the RxJS bug in the above blog post.)


That problem has a simple solution: you, the programmer, should write type signatures in appropriate places as a combination of what documentation and assert statements do in other languages. No compiler today could be reasonably expected to guess which inferences the human brain will find surprising, which is why it's up to the programmer.

This is the point under discussion though -- humans think they're reasoning the same way as the inferencer but can't get it right in the limit, so they're not really equipped to know where they ought to have added type annotations. See e.g. the comment here about the inferencer working 'backwards': https://github.com/ReactiveX/rxjs/issues/4959#issuecomment-5... .

You add the types on every function definition, and almost never have a problem.

Many people forget that the point of type inference is to eliminate uselessly redundant clutter, not to completely eliminate types from source code. A function is usually called many times, so keeping types on public API definitions and eliminating types from call sites and throwaway REPL code eliminates a huge amount of wasted typing (hah!) with nearly no loss of clarity.
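As a rough TypeScript illustration of that split (the function and data here are made up): the definition carries the types, and the call sites stay bare.

    // The public definition is annotated once...
    export function totalPrice(items: { price: number; qty: number }[]): number {
      return items.reduce((sum, item) => sum + item.price * item.qty, 0);
    }

    // ...while call sites and throwaway code lean on inference.
    const total = totalPrice([{ price: 9.99, qty: 2 }]);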


In the case of an RxJS pipeline, as one example, you often have a huge variety of lambda functions that may indeed be called many times, but often exist themselves as "one off" pieces in a pipeline. Leaving the types to be inferred and/or generic is sometimes even necessary to get the most code reuse of functions between pipelines. It's also rare for such functions to feel like part of a public API definition, because they feel like such a huge part of internal and boring plumbing.

As someone that has done a lot of RxJS (and kindred libraries) programming over the last few years, I have a lot of sympathy for anyone hurt by type inference changes in the middle of those pipelines. Some of my ugliest rat's nests of explicit any types and/or worse have been buried in RxJS pipelines next to // TODO and // HACK marker warnings to future spelunkers (myself included) about how much I was fighting the inferencing engine and losing in that particular combo of Typescript version and available typings. It's usually great when I can fix those later, and so far Typescript continues to move in the direction where those rat's nests only get cleaner when I get a chance to revisit them. (But again, I sympathize with that existential fear that Typescript inferencing magic can sometimes take an unexpected step back, at least until you understand why the change was made, how it benefits you elsewhere, and/or what the root cause was.)
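For what that looks like in practice, here is a hedged sketch (RxJS 6-style imports; the pipeline itself is made up): an explicit Observable<string> annotation acts as a checkpoint, so if an upstream operator's inferred type drifts under a new TypeScript/typings combination, the error lands here instead of somewhere far downstream.

    import { Observable, of } from 'rxjs';
    import { map } from 'rxjs/operators';

    const labels$: Observable<string> = of(1, 2, 3).pipe(
      map(n => `item-${n}`),   // if this stage's inference ever changes, the annotation catches it
    );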


> appropriate places

This seems like a good task for a linter and some commit hooks.

During development, I like to keep my types unspecified until I'm confident that things have congealed properly. But before merging, I like to make sure to add type annotations to anything that's meant for public consumption.

Partially for readability and documentation. Partially because adding those annotations ensures that the type checker and I are on the same page. But mostly because those manual type annotations represent a promise that I, the programmer, have made about the current and future behavior of a certain chunk of code. Once those promises have been codified, then the type checker has my back and can help warn me if I'm about to break one of them with a careless change at some point in the future.


IIRC there are a couple of Rust write ups about this.

In Rust, inference is limited within functions. The language doesn't allow inferring the argument or return types of functions to avoid this kind of action at a distance.

The function API is just one of the many arbitrary places where one can limit inference and require users to provide types. In other languages the module boundary might also be an appropriate place to do that.


A big part of TS is the ease of transition from JS where you immediately get benefits through gradual typing and return type inference is probably a big part of that.

This is true, but a lot of projects are now starting with TS, and maybe could benefit from intentional limitations on inference, at least locally. I’m pretty sure most folks start new projects with strict enabled as is.

In my experience a lot of people don’t necessarily start projects with strict mode — and they _really really_ should ... most people I’ve talked to who couldn’t figure out how to use typescript never figured things out because they ended up somehow with a compiler configured with noImplicitAny: false — typescript really needs to find a way to change the default here ...

Yes, my advice is almost always that strict should be true for every new project, and even sometimes for migration projects where the end goal is to decrease tech debt and bugs as quickly as possible and the team isn't afraid of a giant compiler error waterfall as motivation. (It's a nice auto-generated TODO list with some semblance of progress reporting!) But at the very least, everyone should do themselves the favor and set "noImplicitAny": true, no matter whether it's new, a migration, or something in between, and no matter what they think of any of the other strict flags. Explicit anys are searchable TODO markers and fine, but implicit anys always seem to be hidden bugs waiting to be found.
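For reference, a minimal tsconfig.json sketch of just those two options (note that "strict": true already implies noImplicitAny; both are shown only because the comment treats them separately):

    {
      "compilerOptions": {
        "strict": true,
        "noImplicitAny": true
      }
    }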

Sure. But if you write some code (or update a dependency) and it breaks with spooky action at a distance, the first thing you should do is to pay off your tech debt and add type signatures to find the breakage, before thinking about changing TS.

The github comment says: " (I might suggest the underlying problem in this code is relying on inference too much, but the threshold for "too much" is difficult to communicate to users.)"

The first part is exactly right; the second part is well established in the Rust language spec and Haskell style guides like http://www.cis.upenn.edu/~cis552/current/styleguide.html and https://kowainik.github.io/posts/2019-02-06-style-guide#func...


The compiler should be capable of inferring the type as far as reasonably possible. But it should be an error not to specify the types of top-level functions and class methods.

This way, when you write

    const f = (a, b) => a + b
the compiler can tell you to write:

    const f = <T extends number|string>(a: T, b: T): T => a + b
or

    const f : <T extends number|string>(a: T, b: T) => T
            = (a, b) => a + b
This would be effectively like GHC's typed holes, where the compiler will tell you what to write when you say

    f :: _
    f a b = a <> b
except that rather than being an optional step by a developer, it's a mandatory standard.

(Well, I don't think typescript infers that type for that function, but whatever it actually infers, that's what should be there in the error message.)
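(For the record, a hedged sketch of what a current tsc actually does with that definition: with noImplicitAny off the parameters silently become any, and with it on the error appears at the definition rather than at some distant call site.)

    const f = (a, b) => a + b;
    // noImplicitAny: false  ->  f is (a: any, b: any) => any, which compiles but hides bugs
    // noImplicitAny: true   ->  error TS7006: Parameter 'a' implicitly has an 'any' type.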


I don't think typed holes are necessary for that feature. It suffices to just type check up to the function signature. If I write:

    fn foo() { 42 } 
in Rust, I get an error that says that `i32` (the type of `42`) is not of type `()` (unit), which is the return type of `foo`. So I can just change it to:

    fn foo() -> i32 { 42 } 
instead. The same applies for generics, if I write:

    fn bar<T>(x: T, y: T) -> T { x + y }
I get the precise error that T does not implement the `Add<T>` trait. In this case the error is precise because there is only one trait in scope that supports `+`; when many traits would fit (which is rare), the compiler suggests some (often all of them). With that error I can just change that to:

    fn bar<T: Add<Output = T>>(x: T, y: T) -> T { x + y }
This is all done through local type inference within the function.

Same choice Java made.

This sounds like a good tradeoff. Perhaps this could be added to future versions of TS with a flag, if it's not already in there.

I think the underlying problem is that in cases of long inference chains, it's hard to detect where the problem lies if types on both ends don't match. In my experience, the compiler will just show an error on the "latter" end, then the user needs to trace inference back and figure out where things went wrong.

Personally, I experience this quite often, e.g. in Java whenever arrow functions are heavily used, such as with streams.

It would be nice to have compilers report the whole inference chain somehow, just to be clear about all possible spots where things might be wrong. But I suppose that's difficult to visualize in a well human-readable way.


Some IDEs show type hints on hover, or if a type mismatch is detected, they display inferred types inline in code. E.g. IntelliJ IDEA does that for Scala and it is really useful. I wish it did that in Java as well.

> But I suppose that's difficult to visualize in a well human-readable way.

It's not. The type-checker must have constructed a call stack of inferred types in order to find the violation, so it simply has to print all the calls (source file and line number) and the types it deduced, and let the programmer (or IDE) compare that to the code in context and look for surprises.


Typically my strategy is:

1) Make types explicit (or generic) at any interface boundary

2) Periodically query the inferred types of things from my editor just to make sure they come out to what I expect

3) If an error message feels like it's in the wrong place or an inferred type isn't what it should be, progressively make things more explicit until the problem is resolved

But you're right, that's all very subjective and dependent on learned norms.


This is the same strategy I use for writing tests. This is natural, because types are compile-time tests.

> because the inference isn't making a breakage; just shifting around where the breakage appears

That's a very valuable insight, and a concern in a lot of languages with advanced inference!

The problem is exacerbated in Typescript though because it is fundamentally a practical, evolved layer over Javascript, with an unsound type system that likes to bail out to any.


>this code is relying on inference too much

Is this some TypeScript joke I'm too Hindley-Milner to understand?


Not sure. Once upon a time I wrote a small program in Haskell for an exercise in cryptography. It defaulted to Word8 for all arithmetic throughout the program because some of the operands came from a ByteString. Scary. Maybe you are less too Hindley-Milner and more following the advice of explicit types on at least all top-level declarations?

Could be, yes :)

The real problem with shifting where it appears is when the error message is not helpful. With good error messages, it's not enough of a problem to justify abandoning type inference, in my opinion.

Just to clarify: you're suggesting I believe that the programmer should be advised by the compiler to add one or two explicit type declarations to certain programs, not that the compiler should refuse to compile any programs that currently compile.

It should be a compiler strictness flag.

I also found it interesting because just as the later sections complain about cases where they relied on inferencing "too much", the earliest section also shows cases where they relied on inferencing "not enough". (Had the developer left that D3.js return object to be inferred, for instance, rather than explicitly typing it by copying the inferred type from a tooltip, it wouldn't have been broken by the generics change.)

It's an interesting goldilocks paradox likely only exacerbated by the size of Google's codebase and variety of developer code styles/inferencing preferences.


In my experience with type inference (quite a bit of mypy, now learning rust), inferring local variable types based on what's assigned to them is not very confusing, as long as I or the IDE can figure out what type is being assigned. But when a function return value or a variable's generic argument is determined by where the value is used, I tend to have more trouble (since one type can propagate from one function, to its argument, to the function which generated the argument).

The pain of "unexpected any" they have doesn't come just from the type inference, but from gradual typing that makes it "too much". Without unknown/any being allowed to spread everywhere, the inference would have failed sooner, and the error/confusion would be contained (e.g. Rust stops on an ambiguous method call, plus limits implicit inference to function boundaries).
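A small sketch of that spreading (the names are hypothetical): once a value is any, everything reached through it is silently any as well, so nothing fails until runtime, far from the source of the problem.

    declare const response: any;               // e.g. a value from an untyped library boundary
    const userId = response.data.user.id;      // still any, no complaint from the checker
    const padded = userId.padStart(8, '0');    // also any; throws at runtime if id turns out to be a number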

I feel this is a lesson every Haskell programmer goes through. Most Haskell programmers I know write types for all of the top-level declarations. Although it usually isn't necessary, it makes drastic improvements to the quality of error messages.

> Although it usually isn't necessary, it makes drastic improvements to the quality of error messages.

It also provides valuable documentation. You can usually work out what a function does from just its type (and quite often, that's all you have to work with).


> just its type (and quite often, that's all you have to work with).

which is a weakness in the culture of Haskell library authors and/or the ecosystem for contributing documentation patches.


Couldn't agree more. My recent experiences with Rust and especially Elixir have set high standards.

You should have let other people highlight this bit.

Typescript is absolutely amazing. I've been working with it for the last 8 months.

https://getpolarized.io/

and the source is here:

https://github.com/burtonator/polar-bookshelf

I could NOT have made as much progress just by using JS directly. When you have a large code-base and you're trying to make progress as fast as possible, refactoring is needed and the strict typing is invaluable.

Honestly, the MAIN issues with TS are around webpack + typings for 3rd party and older modules.

I'd say 85% of the code I want to use already has types but when they don't it's frustrating to have to pause and write my own types.

I have 20 years of Java experience. Used it since 1.0 and for the most part have been unhappy with everything else.

I've decided that Node + Typescript is by far the most productive environment for me to switch to. I can code elegant apps with both front and backends and I get strict typing.

Could NOT have made so much progress without TS.


I'm writing this as a developer who writes a lot of C# and Typescript: Have you tried C#? Even though I sometimes miss the flexibility of Typescript when writing in it, I love the reliability. Maybe it's not as battle-tested as Java (especially with all the rewrites recently), but it feels like it's 99.9% there.

Not GP, but as a developer working with TypeScript and C#, I respectfully disagree regarding the type system. There are many things I like better in C# compared to JavaScript, but I feel overly constrained by the type system of C# way more often, and most notable is the lack of discriminated unions (aka sum types etc). You can say that something has this and that, but not that it is this or that.
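For readers who haven't met the term, a small TypeScript sketch of a discriminated union (the kind field is the discriminant; the names are illustrative):

    type Shape =
      | { kind: 'circle'; radius: number }
      | { kind: 'rect'; width: number; height: number };

    function area(s: Shape): number {
      return s.kind === 'circle'
        ? Math.PI * s.radius * s.radius
        : s.width * s.height;   // here s is narrowed to the rect variant
    }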

I agree, as a mostly C# developer for years, then Typescript (Node, Angular and React) for the last couple of years, when I go back to C# projects, I feel like I'm doing a lot of work for the compiler. And really elegant use of the TS type system doesn't translate well into C# many times, forcing me to write boilerplate.

I've been eyeing F# for more elegant managed code, but that's going to take me a little more up-front investment to get productive.


To be clear, are you using TypeScript for both frontend and backend?

Are you using Angular as your frontend?


Not the parent here, but I use TypeScript with React day to day, as well as with node.

I feel a lot less productive without it and the interaction with React has been perfect in my view for the last couple of months.

With hooks allowing a lot more inference in typing and removing most need for Higher Order Components, there aren’t any places where the typing feels like a big impediment. It’s definitely a lot better than it was, say a year ago.


This write-up led me to Evan Martin's blog. What a gold mine. Digestible writing with a peek inside of Google's internals. http://neugierig.org/software/blog/

I love TypeScript. I started using it around v1.0. Microsoft has hit some gold with it.

When I first started using it I had lots of `any` in my code (like the Google employee is describing here). But over time it really starts being extremely clean.


I started around 0.7/0.8, and it was already a fantastic product back then. It's just Javascript on steroids.

I don't have a significant amount of experience with TypeScript, but even from my limited experience I agree that it's a fantastic product. That said, at $JOB we (unfortunately) have a fair amount of production code written in TypeScript 0.9 which nobody has ever upgraded, and simply won't compile on a more recent version. It's been that way for years now, and every attempt to bring it up to date has been met with failure.

It may be an overly broad request, but I'd be very interested if anyone had any suggestions for how we might go about performing such an upgrade in an iterative manner. My understanding is that pre-1.0, TypeScript went through a number of breaking changes (as would be expected of any pre-release software), but I've never found a complete list of what those breaking changes were.


You can see a list of all changes in the "What's New in TypeScript" document on GitHub:

https://github.com/Microsoft/TypeScript/wiki/What's-new-in-T...

And a list of breaking changes here:

https://github.com/microsoft/TypeScript/wiki/Breaking-Change...

They both go back to v1.1. For older changes you can check the blog:

https://devblogs.microsoft.com/typescript/announcing-typescr... https://devblogs.microsoft.com/typescript/announcing-typescr...

Anyway, it's hard to say what you need to do to get your code to compile on the latest TypeScript version without knowing what some common compile errors are in your code.

Here are some guesses:

1. The way you obtain type definitions for third-party packages has changed. It used to work with /// <reference> and tools like nuget or TSD (TypeScript definition manager). But now it works with npm in the @types namespace and some npm packages even include definitions themselves now.

Of course, updating to the latest type definitions would mean that you have to upgrade to the latest versions of the third-party libraries as well. Otherwise you have to find a way to keep working with the old definitions for the respective library versions.

2. Pre-1.5 TypeScript had something called "internal modules" which were renamed to "namespaces" and are generally discouraged now. Hopefully your code is not using that feature.

https://www.typescriptlang.org/docs/handbook/namespaces.html

3. tsconfig.json. I don't remember when they introduced it, but you likely need to create this file and tweak the settings
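As a hedged starting point for a migration like that (the flag names are real; the values are just one loose baseline meant to get old code compiling before tightening anything):

    {
      "compilerOptions": {
        "target": "es5",
        "module": "commonjs",
        "allowJs": true,
        "noImplicitAny": false,
        "outDir": "dist"
      },
      "include": ["src"]
    }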


> 2. Pre-1.5 TypeScript had something called "internal modules" which were renamed to "namespaces" and are generally discouraged now. Hopefully your code is not using that feature.

Related to this, the TypeScript pre-1.0 import/export syntax for "external modules" was grandfathered in, but the semantics changed alongside the "internal modules" changes in 1.5. So pre-1.0 import/exports still mostly "compile" in post-1.5, but have very different semantics and that can often cause a lot of downstream module shape headaches, depending (and exacerbated by) the module format you are targeting.

A first pass making sure to rewrite all import/require/export statements can save you a lot of work later, even though it won't directly cause compile errors. You can get some compile errors by making sure you try to compile to a "proper" post-ES2015 module format (--module es2015 or --module esnext, say). There are also tslint rules such as "no-require-imports".
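Concretely, a hedged before/after of the rewrite being described (the module name is made up, and the two forms aren't always drop-in interchangeable, e.g. around esModuleInterop):

    // pre-1.5 style import, grandfathered in but with changed semantics:
    import lib = require('some-lib');

    // ES2015-module style that plays better with --module es2015 / esnext:
    import * as lib from 'some-lib';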


So, because it currently "works", it should "safely" compile by adding many `any`s (with noImplicitAny and the other strictness flags set to false).

Then it's a matter of cleaning them up.

But usually, at least in my experience, the problem is the libraries and their API, which used to be very much full of any in those dark times.


Issues like these are the reason we can thrive as a community. Not playing a blame-game and offering direct feedback to a wonderful open source project. Trying to get my own company to get more involved in feedback to the open source tools we use as I think it is so extremely respectful and encouraging!

A little off-topic:

I'm currently trying to find a way to write (type safe) business logic once, then reuse it pretty much everywhere (mobile / web / desktop), and it seems to me that TypeScript has become the only option. A JavaScript runtime is present everywhere, and can interface with anything.

Does someone know of another alternative (viable right now, or in the coming months)? I know LLVM can theoretically target any platform, including wasm, but how painful is it in practice? Can you write a line of code that does a network request and then expect it to run as-is in the browser and on mobile platforms?


Dart is one option, and Flutter means that you can share UI code as well. Dart is a fundamentally boring language (in a good way). If you have any experience of OO languages it will take about 10 minutes to pick up. I missed union types and pattern matching from FP, but otherwise it works well.

C# (with .NET Core 3.0 officially launching on the 23rd of September, Blazor (WASM) support is included.) I have been playing around with Blazor a bit with the preview releases and I have to say it's pretty slick.

Does JavaScriptCore support running wasm (on iOS and Android)?

Also, as I mentioned in another answer, I'm not only concerned about being able to run the code on another platform, but also about having it run in a "friendly" environment (with some kind of cross-platform I/O API).


Did they improve the download size?

No and on that basis it's unusable. However that's because it downloads the whole runtime. Eventually MS intend to load only your compiled code.

You can avoid that issue with the Server Side Blazor concept for business side portions of your application.

Depending on the application you can get extremely far with just using HTTP requests to an API for anything business logic related. A lot of apps over complicate themselves by trying to force a JS framework on the front end with no benefit to the user. I seem to prefer using non-SPAs these days over SPAs because very few places do them well. Google being a terrible SPA developer.

- Edit - I forgot to mention Elixir's Phoenix has LiveView which is similar to Server Side Blazor.


As I understand it, Blazor with compiled business logic rather than a bundled runtime is due in a future release of .NET Core. Not sure if it's ready for 3.0 later this month, but that will be the game-changer Blazor promises to be. Yes, Phoenix LiveView is similar, but unfortunately Elixir hasn't really captured sufficient mindshare for it to have an impact.

I would not be so categorical.

I have a data-heavy web app where users will routinely view over 300MB of data in charts and photos. In this context, the .NET runtime is insignificant. In fact, you could argue such data-intensive applications should be native and use local storage. Lastly, the browser will cache the .NET runtime, and won't download it very often.


Fable (https://fable.io) is perhaps another option - it compiles F# to JavaScript.

If you do go with TypeScript, have a look at Nx. It's tooling around a TypeScript monorepo that helps you do exactly what you want: write code once and share it between backend/multiple frontends. It supports Express or NestJS in the backend and Angular, React, Web Components in the frontend. https://nx.dev/

If you want to make it work with Ionic/NativeScript/Electron as well use xplat. https://github.com/nstudio/xplat


I don’t see anything regarding mobile platform (native).

True, it's not possible because you need to write your code in Swift / Objective-C for iOS and Kotlin / Java for Android. And Nx only supports TypeScript.

I have no personal experience with this, but can't Kotlin target the JVM, native and JavaScript?

Yeah. We've tried this, also sharing typescript code, but we've decided that sharing the same business logic code everywhere just isn't worth all the extra work to do so.

Have you looked at ReasonML/OCaml? It certainly compiles well for web (bucklescript) and desktop. I haven't tried it for mobile but I would be surprised if it didn't work well.

The ReasonML native tooling (i.e. non-web) is evolving, but it's fundamentally sugar on top of a long solid history of OCaml.


I'd love to have ML expressiveness. The only thing i'm a bit worried about is the I/O abstraction level.

I'm not really looking for a GUI abstraction layer, but I'd like to be able to write to a file, or perform a network request, in a platform-independent way. I'm afraid this requires a little bit more than just a JavaScript transpiler target.


ReasonML is just an alternative syntax for OCaml, so you can definitely use it for IO.

It's BuckleScript that is an alternative backend for the OCaml compiler which allows you to output JS, but you don't need to use it.


That's where my understanding of compiler / Operating system / standard library falls a bit short.

I suppose the full ocaml standard library isn't available when you compile your code to native iOS or native Android, is it ?

I mean, iOS and Android are probably not POSIX compliant. File system and networking access have very specific constraints that are way different from your regular server. At least that's my feeling whenever I code on iOS. Your application needs to whitelist URLs in a plist, you need to work with foreground/background states (e.g. timers are paused when in background). I can't imagine that the standard OCaml library doesn't need adaptation to run in those environments.

Am I wrong?


Clojure has a specification language that can express more and less things than a type system can, and it runs on mobile, web, and desktop with flexible choices of runtimes.

Good luck untangling re-native built on re-frame built on Reagent built on shadow-cljs built on React native built on React built on Android/iOS when one of those parts change.

You make it sound like more of a tower than it actually is. It's just ClojureScript to React Native. All the others are simple frameworks and build tool to help you scaffold.

So that equation applies to all JS transpilers. Why is this a ClojureScript-specific complaint?

Because Clojurescript adds at least another 3 layers of indirection.

> that can express more and less things

So which is it? ;)


Not OP, but I'm guessing they meant it can express more, but it can statically type check less. Since Clojure's system is a runtime contract and not a static type checker.

I'm pretty sure most languages at this point can be made to run across all platforms, including mobile. The bigger question is how gross your tooling/interop is going to have to be to make all of that work, and how much your productivity will be thereby affected.

I've recently built a Node server with TypeScript and it's a joy to use with external libraries when the types are available. It's such a time saver to not have to guess which method to call with which arguments (I've only had experience with dynamic languages before). Some libraries don't have types or they are outdated, but they were a minority.

With experience I've found that most of the type errors are actually between the backend and the frontend in web applications. It's still hard to fully type the entire flow from the database calls with the ORM to the object manipulation in the frontend.

How are you dealing with that? We used Nexus with GraphQL but it was still a bit cumbersome.


We use https://graphql-code-generator.com/ in combination with some scripts which update the graphql schema at build time - works great. Using Apollo on the front end.

Having types straddle the client/server divide is a huge win


Almost every single Flow version upgrade is like this — every new version brings a slew of errors due to Flow’s continuous movement away from practicality towards “soundness”.

Agreed. It seems to be endemic to Flow's culture. We use it at my company and the signal/noise ratio of helpful to unhelpful errors hovers around 0.5. A handful of simple flags to relax checks in specific, extremely common cases would cut out 80% of that noise, but requests on GitHub have been shot down in the name of "soundness". Things like allowing null to be implicitly cast to a string, or assuming document.body doesn't need an undefined-check every single time you use it. TypeScript feels like it's written by JavaScript developers who want IDE hints. Flow feels like it's written by OCaml developers who loathe the very language they're adding types to, and are trying to make it something it isn't, with reckless abandon towards actual developer productivity.

This very well could be the reason Flow has been steadily losing mindshare in favor of a total TypeScript monopoly. TypeScript is great, but lack of diversity is a shame regardless.


How can null be implicitly cast to a string? I wasn't able to reproduce with: https://flow.org/try/#0PTAEAEDMBsHsHcBQiDGsB2BnALqAhgFyg4BOA...

Flow has built-in support for the Language Server Protocol. tsserver doesn't even support LSP. I agree TS has better IDE support, but that's in part because they don't use LSP. Odd that they expect others to follow it when they don't.


GP is saying they would like for flow to offer some leeway for things like this: https://flow.org/try/#0MYewdgzgLgBAZiEMC8MDaByBIMF0DcAUKJLNA... (i.e. not nag on a "technically-it-could-be-non-string-but-I-know-for-a-fact-that-this-is-fine")

LSP is kinda irrelevant to the argument that they feel that TS+VSCode is geared towards real-world productivity, vs the feeling that Flow prioritizes academic goals over devexp.


Correct. JavaScript is fundamentally flexible on certain things, and while it's very helpful to have some assistance with taming that wilderness, it will fundamentally never be perfect, and by fighting the language tooth and nail on things that don't matter, one only creates developer pain.

In plain JavaScript, 'foo' + null == 'foonull'. This isn't always what you want, but in the vast majority of cases if you're creating a string, it's:

1) For displaying to the user

2) Generating a CSS class or something to that effect

In both of these cases the above is quite a reasonable behavior, and will never cause a runtime exception, and very rarely even a business-logic error. But if I'm working with the result of a .find() or a maybe-property on an object type, I now have to add null checks (or at best || '' fallbacks) everywhere, muddling up my code and making it harder to follow.


Can you define an alternative to '+' that explicitly accepts null?

I’m not sure I agree. I’ve found updating Flow in a rather large monorepo a relatively straightforward process. The changes are usually rather small because the team releases every two weeks. They also manage the update internally within their company’s monorepo so they’ll usually find out about these types of unexpected behavior changes before the community does.

That said, we do rely on automated error excludes (similar to eslint-ignore-next-line) for things that cannot be fixed with codemods. Those errors were always there; it's just that now you know about them. Better to stem the bleeding by updating the type checker to the latest version.


Eh. That hasn't been my experience. FWIW, having seen the codebase you're referring to, I don't consider a huge ignore list exactly a success story. The bulk of the work is merely deferred as tech debt at this point.

To be fair, many Flow upgrades are easy, yes. But some are absolutely nightmarish. The 0.85 upgrade was especially painful - it involved some very non-trivial codemods, we couldn't get it right in all cases, and it involved some loss of type safety as well :(

I've also found Flow to be painful to work with when dealing with a multi-repo setup which exports types from source code. It becomes unnecessarily hard to make libraries interoperable because a Flow upgrade in one project can cause a cascade of errors deep in another project's transitive dep. One basically has no actionable recourse other than waiting until all her deps are version-aligned on Flow (or never upgrading).

In that regard, the Typescript pattern of using .d.ts files as a boundary between a library's tsc version and the consumer's tsc version is quite nice and something I'd like to explore more w/ Flow.


> I’ve found updating Flow in a rather large monorepo a relatively straightforward process.

I'm not sure you're being entirely honest here, as we both know through feedback that flow updates in the said monorepo are one of the most burdensome processes for its contributors.

As a maintainer/owner of a monorepo experience, it's crucial to maintain empathy and honesty of how processes such as dependency and tooling updates affect (and are perceived by) its users.


Apologies, lxe. I can tell you're frustrated. If the Flow upgrade process has been burdensome to others, I'd really like to know. I've always thought we've done a good job of updating internally on the platform team and that it hasn't been a concern for end users. We did run into some pain points in changing configuration options around the unnecessary optional-chaining lint rules, but that wasn't related to updating the Flow version. I think you may be conflating those.

There are usually codemods you can run to help with updating, and there are tools that add suppression warnings in places that error, which you can run after the upgrade. There's also a tool that removes unused suppressions.

I'm sorry, but I would fire anyone that puts soundness in scare quotes. Yes, there are tradeoffs and yes, it's possible MS is doing it wrong, but soundness is well defined. In an industry that already has a dangerous apathy about all forms of correctness, I consider this flippant tone inexcusable.

You kinda have to have worked with Flow to understand where this comment is coming from. Basically what tends to happen for Flow upgrades is that some upgrades throw a million new errors with incoherent messages. For things that are already verified to be working and in production. So an upgrade essentially becomes an exercise in making the type checker happy for its own sake, at the expense of not implementing features or fixing real bugs.

Typescript does not have soundness as a primary goal for a very good reason: there's a trade-off between soundness and productivity, especially when threading the large gray area between theoretical soundness and the realities of what constructs the compiler implementation is actually able to handle and what the existing ecosystem throws at it.

What the GP is lamenting is that a lot of changes in Flow create more work for developers, but they don't improve the rate at which it catches bugs in practice.


> but any time someone saves a Selection into a member variable, they ended up writing down whatever type TS inferred at that time, e.g.

> mySel: d3.Selection<HTMLElement, {}, null, undefined>;

I'm curious how Google and others approach adopting Typescript gradually, as I'm pretty new to it. I'm assuming it goes like this: the programmer converts code to Typescript and, when they come across return types, they copy the inferred type and add it to the codebase directly wherever possible. I'm assuming that just as a matter of using (untyped) libraries, you need to rely on the output of Typescript in order to try to have every return typed.

So the biggest problem seems to be how TS infers things changed meaning you can't always trust what you copied as staying consistent, even if the source library doesn't change itself. That's always something to keep in mind for overhead.


The explicit type is optional. It's not that it's required to copy out whatever the compiler inferred, it's just that lots of people do it to be explicit.

All types are optional in typescript. I was just curious about Google's approach to it and if they had some standard practice of always copying the inferred type to get a great amount of coverage.

Copying the inferred type is the same as copying your runtime outputs into your tests. It's a statement that you believe the result is correct and it's on you to make that judgement call. It's not a policy to blindly copy everything, because that defeats the purpose of type checking and testing, turning your tests into "verify that nothing changed", not "verify that the system behaves as intended".

Thanks, that's typically how I approach it as well; it's good to hear it written out. I try not to append copied types without fully understanding their structure either. But sometimes you just need to trust it to solve a problem.

There's plenty of blog posts and docs on advanced types but I'm interested in the day-to-day best practical approaches people are taking adopting it. I should look around for some literature or talks on the subject...


So happy to see the attention Typescript is getting lately. Absolutely love the language, and have been using it since it got released.

I tried using it with Angular, but it doesn't seem to help much. For example, if you have something like:

    <button (click)="login(email, pass)" />
And then a TypeScript function like:

    login(email: string, pass: string) {
    
    }
TypeScript can't help you at all here because all the typing is determined at runtime by Angular. Even if `email` is a number or a boolean, no problem, it will just happily pass it in.

What benefit, then, does TypeScript provide? I understand its compile-time guarantees, but how does that help if the types are coming in from HTML land, which the TypeScript compiler doesn't examine at all?


For your template code in angular, there won't be any significant benefit to using typescript. It could be worse than not having typescript at all, since you can add type annotations to your 'login'-function that don't match up with reality.

That's not really a typescript issue though, and it works great with libraries that don't use string templates. From what I've seen the angular community hasn't really prioritized a typecheck-able templating language.


Nor does Vue. The focus is on typing the reactive data that feeds the templates which should be sufficient.

Plenty of things like HTML form elements take numbers or strings just fine, as it all outputs to strings in the end. Additionally, by breaking up stuff into smaller composable components there should be enough gating and typing layers, at least with Vue/Vuex that's the case. If it was just plopping straight into the elements 1-to-1 that might be a different story.


Glad to see someone points this out. That's the main reason I use React + Typescript. JSX is an extension of JavaScript and can be fully checked, while any template language is a custom invention that is hardly toolable.

Angular has been working on type-checking the templates: https://blog.angularindepth.com/type-checking-templates-in-a...

Disclaimer: I have heard about the above enough to dig up a random blog post about it but I don't know a lot about how it works.

I think in JSX (and I've seen experiments with lit too) there is also integration between the template and type checker.


Is there a way to migrate jsdoc-annotated JavaScript code over to TS, and is TS's minifier as good as Google's closure-compiler yet?

Visual Studio Code has a "Code Fix" that can automatically copy JSDoc into TypeScript annotations. This can be done to an entire file at once.
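Roughly what that code fix does, as a hedged sketch (the "before" is shown in comments so the snippet stands alone; the function is made up):

    // Before (JS with JSDoc):
    //   /** @param {string} name  @returns {string} */
    //   function greet(name) { return 'Hello, ' + name; }

    // After applying the JSDoc-to-annotation code fix, in a .ts file:
    function greet(name: string): string {
      return 'Hello, ' + name;
    }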

TypeScript does not come with a minifier. The code it produces is compatible with the target version of JavaScript (e.g. compile ES6 modules into CommonJS) but unminified.


I see. closure-compiler's "advanced" mode takes advantage of jsdoc type info for minifying so I guess generic syntactical minification won't compress as much for the time being.

We (Google TS team) maintain a tool[1] that transforms TS types into jsdoc types for the purpose of feeding them into 'advanced' mode. We also (to answer the grandparent question) maintain a tool[2] that converts Closure-annotated TS into JS. (Why both ways? We transition JS->TS and check it in as the code the user works with, while we use the TS->JS one within the compiler at optimization time.)

[1] https://github.com/angular/tsickle

[2] 'gents', in this repo https://github.com/angular/clutz


I don't know of prod-quality tools to migrate from jsdoc to TS. TS does have comment-level typing, so you can get the benefits without transpiling anything (similar to Flow).

TS pointedly does not minify output. It does the opposite: try to generate code that a human might have written.

It's very easy to incorporate TS into a Babel or Webpack build pipeline though, so you can use a purpose-built minifier of your choice.


The closest things I know of are Clutz and Gents, maintained here: https://github.com/angular/clutz.

They work with Closure-annotated JS (with JSDoc) though; not sure if it fits your use case.


The Angular project has a package for translating TS -> Closure, which appears to be used at Google in some capacity. https://github.com/angular/tsickle

Yes, we use that to use TS together with Closure for all our TS code at Google.

It's a bit difficult to hook up though, and we haven't had the cycles to make the open source version user friendly, I'm afraid :-(


It would be really cool to see the user-friendly version (though that might lead to a rabbit hole on the Closure side, too). The Closure compiler remains awesome, just difficult to make use of successfully.

You might try out https://github.com/theseanl/tscc , which aims to do this.

filter(Boolean) is a bad idea anyways. I was bitten by this before.

Why do you say so?

One reason:

This isn't TS-specific; it's common in Python too: coercing to Bool has language semantics (for whatever language you are in) which often don't match the application semantics of your program. Application programmers don't (and shouldn't have to, but for the language's over-eager coercions) always think about the boolean semantics of all their objects. In particular, None and an empty/zero object are both False in Python, and Python style/linters push you to avoid explicit comparison to None, which gets weird when your application wants to treat empty objects as True because they have different semantics from None. (For example, in a security function, None may mean no-op / fall back to default, but Empty might mean "Reject all".)


Another reason that is TS-specific is that JS functions can be highly variadic in the number of arguments, and a lot of subtle runtime bugs can be found in blindly passing arguments without checking their count. If filter changes from passing only one thing to its callback to, say, two (for instance, an index), Boolean may produce a runtime error for having too many arguments, may silently ignore extra arguments, may interpret an extra argument as changing the behavior, or some combination of all three depending on strictness versus compatibility level, executing browser, phase of the moon, etc.

Boolean itself I've not had trouble with, but things like map(parseInt) are the big one that bites a lot of junior developers all the time in TS/JS. (parseInt takes an optional second argument for radix, so when map passes the element index as the second argument, you get it parsing in base 0, base 1, base 2, … which is almost never something you'd do intentionally.)
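The classic illustration, for anyone who hasn't hit it yet (the results below are what JS engines actually produce):

    ['10', '10', '10'].map(parseInt);
    // -> [10, NaN, 2]   map calls parseInt(value, index), so the radixes are 0, 1, 2
    //    (radix 0 falls back to base 10, radix 1 is invalid, radix 2 parses '10' as 2)

    ['10', '10', '10'].map(s => parseInt(s, 10));
    // -> [10, 10, 10]   the explicit wrapper pins the radix and drops the index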


It is very funny to read this from Google, since we talked about similar problems with Google regarding Angular upgrades, where features that Google considered unused (no known use cases) were modified while we were relying on them.

From my point of view Core members of any super large project (like React, Angular, TypeScript) are limited by design in what they perceive as their target audience and their use cases. This is simply a matter of fact: even as a core dev you cannot know how every dev uses your product.

So this is some sort of left-pad moment for TypeScript.


Not a TypeScript user, but what really stood out to me is that Google are using a monorepo.

Be careful not to mistake Google's use of a monorepo with consideration of whether $DAYJOB or $FOSSPROJECT should use a monorepo.

Google has a lot of tooling and some very thoroughly-considered and reinforced policies and cultures around their use of a monorepo. Trying to use a monorepo without those tools and ingrained policies may not be very likely to lead to similar results.


The default should be keeping code together, and justifying why you are splitting it up, not the other way around.

While what you said is true, the codebase of most organisations is not large enough that they run into those scale constraints for a long time. Instead, if you split up your code you immediately get organisational headaches of managing changes across multiple codebases. If you keep it together, the scale challenges can be managed later, if they occur (c.f. YAGNI).

If you are splitting, codebases should be split across organisational boundaries, not technical ones. If you look at the FOSS world, the obvious conclusion is that each library is in its own repo, and that is partly true. In practice what is happening is that each OSS project is its own organisational team, and so it lives together. If you have parts of your company on different continents working on different things (and maybe you need different access) then splitting up may make sense. Otherwise, you're probably just making your life harder for little to no gain.


I think I'd more or less agree with that.

For example, one of the questions I generally ask in a meeting room that's considering this topic, and has for example microservices in flight, is: "Are you willing to put in the work to make Service A support more than one version at a time in Service B?", and if not, then that's a very (very) strong indicator that the level of coupling and lack of organizational boundaries will result in pain from anything but a monorepo.

I prefer to split things up when possible. When it does fit, there's many benefits. There's also the concern that building too much culture and tooling that presumes a lack of splitting of repos can become its own form of trap which becomes increasingly hard to navigate out of, even if you later want to, as the situation self-iterates. But I agree that splitting things out needs justification, and the "default" stance should include that.



Another interesting bit with the monorepo is the one-version-policy:

https://opensource.google.com/docs/thirdparty/oneversion/

That's why the typescript upgrade was so hard for them. We (attempt to) enforce a single version of a library/toolchain to be checked into the codebase at any given time. You can have multiple versions in during an upgrade, but it's highly discouraged.


This is also why Google says to test everything; even minor version upgrades can have unexpected behavioral changes. Without tests, these might break your project without warning.

On the contrary, having multiple-versions could have made the upgrade much worse, by deferring compatibility problems from submit time to deploy time.

It's a trade-off. As a user of a third-party library, I would like to upgrade it to get new functionality. But there are breaking changes in the update.

So to upgrade, I would have to fix all users of the library as well. While this is better overall for the codebase, it can put a lot of work on others for a not-well-maintained third-party library. Something like TS has people that help keep it updated. But for something more obscure, it'll be on someone else who cares enough to put in the work.


Google is pretty notorious for this. It is one of the reasons behind the old golang GOPATH setup, and one of the reasons it took Go so long to get modules.

I'm not sure that's true. The layout of Go code in Google's monorepo is not at all similar to GOPATH, and patterns that are common within google3 (such as multiple Go packages in one directory) are fundamentally incompatible with the Go build system. As an ex-Googler, I still strongly prefer the google3 style and am annoyed when open-source Go tooling can't deal with it properly.

what is google3 style?

I think it's the mentioned

> multiple Go packages in one directory

as opposed to having a 1:1 package:dir mapping


How does that work on a nodejs project? I understand that they only have one version of the TypeScript compiler for all projects inside the monorepo, so that means there's only one huge package.json inside with all the packages used by every project inside the repo?

The lingua franca for Google's building needs is (more or less) Bazel, where you say target /a/b/c depends on /dep/v1_1, /dep/xyz, /common/foo, etc. (There is a filesystem-like hierarchy parallel to, but not necessarily the same as, the corresponding repository's directory layout.)

Bazel is extensible via rules [1], so if you really wanted to use NodeJS on your team, you might create a `nodejs_binary` rule that put everything in the right directory and ran some NodeJS packager on it. You'd probably not put it into production.

Also, third-party code lives in a single third-party directory, so yes, internal users could pull down code they wanted (and for which there wasn't a satisfactory internal version already) into that directory: https://opensource.google.com/docs/thirdparty/

[1]: https://docs.bazel.build/versions/0.29.0/skylark/rules.html


That stood out to me too... not the monorepo really, but the fact it's a monorepo of a billion lines of code. It seems almost impossible to maintain when you have a dependency change at the lowest levels that affects numerous projects. If I am understanding correctly, Google was in a situation where they had to update every project that used TypeScript inside the entire company at the same time, which seems untenable.

Whether it's a monorepo or not doesn't change the fact that you have a billion lines of code to maintain, but at least this way they're forced to make changes consistently. The alternative would be doing it piecemeal with a dozen versions of dependencies propagating due to fragmentation, which is way way worse.

Here's an article from a maintainer of a framework library (Angular): https://medium.com/@Jakeherringbone/you-too-can-love-the-mon...

Something like this might only be possible due to their tooling and test coverage. So when you change something, you immediately get alerted of broken tests.


Mostly it means that you have to be really thoughtful when introducing a breaking change to a low level library.

Or you can YOLO / Leroy Jenkins the change and let everyone else fix the breakage you create, or see if they demand a rollback.

They will not only demand a rollback but also do it. Besides, low level changes at this scale require a copious amount of approvals.

Google is using the monorepo ;-)

I'm really curious how many lines of TypeScript are in use at Google. I bet most JS is still Closure.

Google sure has a lot of comments these days


