Tips for Performant TypeScript (github.com/microsoft)
281 points by nkjoep 52 days ago | 113 comments

Hi all, original author of the wiki page here. Please take the advice with a grain of salt. Specifically

- union types aren't always bad, DON'T avoid them

- DON'T feel the need to annotate every single thing

Please apply a critical lens as you read through this page. The document was meant for users who've hit rough perf issues, and we tried pretty hard to explain the nuances of each piece of advice.

In spite of the HN headline, I'm glad the wiki page doesn't use the word "performant."

However, I was surprised that "performance" refers to compilation/editing speed rather than execution speed.

As someone working on a couple of now-larger TypeScript apps (both frontend and backend), I've begun noticing compilation taking long enough that I have a currently low-priority (but obviously rising) TODO to go through and refactor to improve compilation speed. I wouldn’t know how to google for this other than using the word “performant”. Although I realize in most cases performance specifically deals with production execution and what end users experience/perceive, I believe performance is not incorrect to describe what the author is helping with here.

> TODO to go through and refactor to improve compilation speed. I wouldn’t know how to google for this other than using the word “performant”

But you already used the words naturally in your sentence, "compilation speed". A quick google search brought up lots of useful results with 'typescript compiles slow', 'typescript compile faster', etc.

It makes sense to me since TypeScript compiles but otherwise does not execute. If you want faster execution speed don’t do ridiculous things in your JavaScript.

A personal app I am working on is about 2mb of TypeScript now and takes about 6.5s to compile on my laptop.

I agree compilation speed really only affects the developer in the build chain. Execution speed affects the client and the end user. Seems like the focus should be more on the latter.

Do you have any tips on diagnosing what a problem might be? I don’t know how to interpret the diagnostics flag output into actionable changes to my company’s code, and while I can blindly do what the wiki article suggests (I found it a while back when trying to figure out what to do), I would much prefer not to be trying time-consuming changes to our large codebase at random... I've been stuck with slow compile performance with TypeScript for almost a year now and I can’t tell what I’m supposed to do, or whether the TypeScript compiler is just too slow.

Thanks for these tips; really interesting. I got a clearer understanding of the differences between types and interfaces, and some confirmation of the merits of writing explicit return types.

I was wondering whether any more could be done to improve editing performance when large .d.ts files are included. This is a problem in particular for NativeScript, which has a vast set of large types files to include to express the entirety of the iOS and Android SDKs, e.g.: https://github.com/NativeScript/NativeScript/tree/master/pac...

In fact, the skipLibCheck flag was originally developed to improve NativeScript compile time: https://github.com/microsoft/TypeScript/issues/8521

Unfortunately, editing still feels slow to me when including NativeScript’s iOS/Android types (have to wait 1-2 seconds after any keystroke for any IntelliSense to appear); beyond including fewer of the types files, could editing performance be improved somehow?


Of the three code-related sections, I think only Using Type Annotations makes sense. While the compiler _can_ infer the return type and the user can mouse over the function to see what the language server has determined the type to be, I feel that explicitly noting the return type is preferable. Yes, the compiler can act more quickly, but it also makes it clearer, more quickly, to others working on the same project what the function does. Even in languages like Swift, which are happy to use type inference, you still must annotate your functions.

The other two code-related sections seem odd: writing code specifically to improve compile-time performance. It would be beneficial to see compile duration differences between projects that heavily use union types and projects that don't. Otherwise, changing your coding style and not using explicit features of a language that are hard to find in other languages seems counter-productive.

That said, the actual compiler configuration changes that follow seem very useful, from someone who doesn't write much TS.

> Otherwise, changing your coding style and not using explicit features of a language that are hard to find in other languages seems counter-productive

As with most optimization suggestions, I take these to be intended as a remedy when you're actually running into problems, not something to be done eagerly. I've never run into significant cross-project TypeScript performance issues personally, but I have heard of that happening to some people.

It does explicitly say this at the top:

> The earlier on these practices are adopted, the better.

Technically, yes, but we don't expect most users to stumble onto this page unless they're already hitting perf issues.

Haha, good thing it's on HN now then :P

Hum. I missed that. That does seem unideal.

My only issue with this is that it introduces the possibility for human error. It’s rare, but if the returned object fits more than one type (say, a superclass vs. a concrete class instance) the incorrect one could be selected and the code would still compile. Is this even a valid concern?

The only time I see manual type annotation cause problems consistently is with React.FC<Props> versus (props: T). People don’t always remember to provide their props interface as the generic, and instead directly annotate the props argument of the function. This is a subtle issue that breaks the “magic” props added by React (like children), leading to people adding their own children definitions to their props interfaces. D’oh.

I personally find that manual return type annotations actually prevent some errors. A common case: I forget the return statement in one branch, and TypeScript is happy to infer something like number|undefined as the return type.
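A minimal sketch of that failure mode (the `classify` helper is hypothetical):

```typescript
// One branch forgets its return statement. Without an explicit annotation,
// the compiler silently infers a return type that includes undefined.
function classify(n: number) {
  if (n > 0) return "positive";
  if (n < 0) return "negative";
  // forgot: return "zero";
}

// At runtime, the missing branch just yields undefined.
console.log(classify(0)); // undefined

// Annotating the signature as `function classify(n: number): string`
// would instead make the compiler report the missing return path.
```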

>leading to people adding their own children definitions to their props interfaces

IMO this is a feature not a bug. Type definitions aren't just for the compiler, they're also for the developer. Being able to see at a glance which components expect children and which don't is really valuable. Not to mention that there are situations where I want to restrict what kinds of children can be passed in (think render-props, or named slot projection patterns).

In other words, just because React supports an implicit definition of what a "child" can be doesn't mean that my specific component supports all of those same possibilities.

I see your point, and agree under the condition that I trust everyone contributing to the codebase knows it. But, in reality, they don’t. I’d rather have the children type available but unused most of the time than one-off type definitions of children.

Maybe for my own projects I’ll employ your approach, because I agree with it from the fundamentals side.

Would be really helpful if there was an autofix suggestion available so that LSP can add the return type for you.

I'd love it if typescript could one day grow a mechanism to allow it to rewrite closure definitions to contain the inferred type declarations -- some kind of keyword to indicate "this return type will be re-inferred by compiler based on the call sites within local scope" and have it integrate with ide "rewrite on file save" infrastructure.

So you get minimal keyboard typing when passing around inline closures -- don't have to write the types yourself or maintain them as the code changes -- and any changes to the inferred return types would be visible in source control with diffs that provide quite rich information about changes that might have done something unexpected or propagated further than realized

The default eslint/prettier settings require return types for this very reason. Readability and not having to tab between files to figure out a return value is surprisingly helpful for your overall dev velocity.

Worth emphasizing the first sentence:

faster compilations and editing experiences

i.e. not runtime perf.

I assumed it was going to be about compilation as the subject is TypeScript, which feels like a safe assumption. I suppose there could actually be runtime consequences of TypeScript too, as you’re still writing runtime code, just through a game of telephone with TypeScript :p

> I suppose there could actually be runtime consequences of TypeScript too, as you’re still writing runtime code, just through a game of telephone with TypeScript :p

Right. One of the first things you learn while doing TypeScript is that interfaces don't exist after compile time. So in general, it's better to write interfaces and use plain-old-objects than to use `class`, which generates real JavaScript code.

Within reason, of course. Classes are still useful. But it's not necessarily the first thing to reach for.
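A small sketch of the distinction (names are illustrative):

```typescript
// The interface is erased during compilation: no JavaScript is emitted for it.
interface Point {
  x: number;
  y: number;
}

// A plain old object satisfying the interface costs nothing beyond the literal.
const p: Point = { x: 1, y: 2 };

// A class, by contrast, compiles to a real constructor function in the output,
// in exchange for methods, instanceof checks, and so on.
class Vector {
  constructor(public x: number, public y: number) {}
  length(): number {
    return Math.hypot(this.x, this.y);
  }
}

const v = new Vector(3, 4);
console.log(p.x + p.y); // 3
console.log(v.length()); // 5
```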

Well, of course not. You can't "run" TypeScript.

No--but certain constructs could translate to slower than expected execution speed.

That's something that needs proof from a profiler. Intuition on what's "slower" is really awful, particularly for JITed languages.

I don't think you could give general advice on what is slower as that's a constantly moving target.

If TS could compile to asm.js you certainly could. (The fact that it can't is a big reason why I believe it's a fundamentally bad solution to writing JS scaleably.)

I think you mean Wasm; asm.js is dead.

Looks like assemblyscript compiles "a strict variant of TypeScript" directly to Wasm:


This is pretty close to "running" TypeScript.

I tend to refer to asm.js and wasm interchangeably, but yes, wasm would be the technically preferable term.

Never heard of AssemblyScript, but yeah, it or a statically typed language like it is where frontend development needs to go. In concordance with my point about TS being fundamentally unsuitable, the AssemblyScript docs (https://www.assemblyscript.org/basics.html#strictness) say:

> WebAssembly is fundamentally different from JavaScript, ultimately enabling entirely new use cases not only on the web. Consequently, AssemblyScript is much more similar to a static compiler than it is to a JavaScript VM. One can think of it as if TypeScript and C had a somewhat special child.

> Unlike TypeScript, which targets a JavaScript environment with all of its dynamic features, AssemblyScript targets WebAssembly with all of its static guarantees, hence intentionally avoids the dynamicness of JavaScript where it cannot be compiled ahead of time efficiently.

You can't "run" C, or Go, or Javascript, etc.

I think the parent comment meant that typescript is a static checker, which means it's not exercised at run time but in a prior step. Therefore "performant typescript" means to shorten the time it takes to perform these static checks, not the time it takes to run your code. In contrast, when people talk about optimizing C code they most often (but not always, of course) mean to write code that runs fast.

I find the mentality around TypeScript to be bizarre.

"It's just a type checker for JavaScript" - not really. It's a language that transpiles to JavaScript and it happens to be a superset of JavaScript (namespaces and enums, anyone?).

But saying that it's just a layer on top of JavaScript sounds like saying C++ is just a layer on top of C.
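An enum is a good illustration of that superset point: unlike a plain type annotation, it survives compilation as real JavaScript. A small sketch:

```typescript
// tsc emits an actual object for a numeric enum, including a reverse mapping,
// so both lookups below work at runtime.
enum Direction {
  Up,
  Down,
}

console.log(Direction.Up); // 0
console.log(Direction[0]); // "Up"
```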

> C++ is just a layer on top of C

C++ used to be just a layer on top of C, but not anymore. If it still was, it wouldn't be a bad thing to say.

Type checks that run during compilation/transpilation/typechecking do not "run" as the code is executed. This is what happens with Typescript, and that's what the original commenter was saying.

I'm not sure I understand your point. Typescript is a language with a type checker and compiler. It's the same thing with C. Typescript has to be compiled to run. C has to be compiled to run. When I think of language performance I always default to thinking about runtime performance.

> I'm not sure I understand your point.

When people talk about C performance, they mean the performance of the code written in C and (ultimately) compiled to machine language. Very seldom they mean the speed of compilation (though they sometimes do, in which case they almost always explicitly mention "compilation").

When people talk about Typescript performance, they either mean Javascript performance or, more likely, the speed with which Typescript type-checks their code; the latter doesn't happen in run time. That's what the original commenter meant when they said "you cannot run Typescript". It's also what TFA means by "performant".

Deno[0] supports running TypeScript without needing to compile it to JS.

[0] -- https://deno.land/

Deno still compiles it to JS. It just does it for you.

Which at one point gets compiled to machine code. At some point, you have to stop and say, "Yeah, this effectively runs TypeScript"; otherwise, as I said, you'll end up saying nothing gets run but machine code. While that's correct, it's not very useful.

I think the key point in this case is that the performance characteristics of Deno don't significantly differ from TypeScript independently transpiled to JavaScript and run in Node because the compilation pipeline is more or less the same (Deno is using tsc and v8 internally)

I think the distinction will hold when you can access type information at runtime

Why? Haskell doesn't even have runtime type information. So what would you call the stuff that runs after I compile my Haskell code? Not-Haskell?

That’s a topic not related to TypeScript; it’s JavaScript, after all.

In practice I don't think there are, but it's not inconceivable that there could be performance considerations specific to how TypeScript generates JavaScript.

That's rather sad, union types are really what I like most about TypeScript. This might explain why VS Code feels so slow sometimes when type checking, because I have a few types that rely heavily on unions.

> However, if your union has more than a dozen elements, it can cause real problems in compilation speed. For instance, to eliminate redundant members from a union, the elements have to be compared pairwise, which is quadratic. This sort of check might occur when intersecting large unions, where intersecting over each union member can result in enormous types that then need to be reduced.

This statement makes me think... how come the TS compiler is not using something like a hash/map (object) of union members to basically ignore redundancy?

Or any other strategy really. The union of unique values in 2 or more arrays is a classic CS problem for which there are many, many performant solutions.

Anyone familiar with the TS internals? Maybe I'm not seeing the forest.
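For concreteness, a minimal sketch of the kind of discriminated union under discussion (two members here; the wiki's quadratic concern only bites at dozens):

```typescript
// Checking assignability against a union means checking the candidate
// against each member in turn, which is where large unions get expensive
// for the checker.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(s: Shape): number {
  return s.kind === "circle" ? Math.PI * s.radius ** 2 : s.side ** 2;
}

console.log(area({ kind: "square", side: 3 })); // 9
```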

What you’re not seeing is that TS is a structurally typed language, not a nominally typed language. Two type signatures defined in different places with a different name may be assignable to each other - so typescript usually has to deep-compare, recursively, the two types until a conflicting field is found.

So just hash the structures?

TypeScript team member here: the structures are arbitrarily deep and recursive. Also, as explained in a sibling comment, it's not just about type identity, but type assignability.

Hashing lets you know whether it's the same type or two different types, but you still need to look at the members to find out whether one is a subtype of the other.

> This statement makes me think... how come the TS compiler is not using something like a hash/map (object) of union members to basically ignore redundancy?

The trouble is the operation isn't "is X a member of Y", rather it's "does X match any values of Y according to predicate P."

You can break that out if you have knowledge of possible X's and P, as is the case with type matching.

Say we are checking G[] against {str, "foo literal", int[]}. I have no idea how TS implements these internally, but say the underlying type expressions are:

    [{head: String}, 
     {head: String, constraint:"foo literal"}, 
     {head: Array, element:{head: Integer}}]
And G[] is {head:Array, element: {head: Generic, name: "G"}}.

We could reasonably require that heads are all simple values, and then group types as such:

    {String: [{head: String},
              {head: String, constraint:"foo literal"}],
     Array: [{head: Array, element: {head: Integer}}]}
You'd still have to try to match that generic parameter against all the possible Arrays, but you could add more levels of hashing.

The downside is, of course, it's quite tricky to group types like this and prove that it returns the same results as checking all pairs, especially when you have a complex type system.

I've found VS Code to be pretty slow regardless of what I've used it for. The UI is snappy, but the hinting/linting/type-checking is sometimes shockingly slow.

The only reason I've used VS Code for more than an hour in the last few years is because its Svelte plugin was much better, but now JetBrains has a good Svelte plugin, and I'm back to JetBrains 100% of the time. It's worth every penny.

Yea, I've been using VSCode these past couple of weeks as I'm now coding in Python + JS after a few years of only JS with WebStorm... and I'm buying a PyCharm license tomorrow.

VSCode is fantastic in many ways, and I'll keep using it for my markdown dev notes and for general-purpose programming occasionally, but for anything of substance I'm a JetBrains convert.

In this case it's literally the LSP; maybe JetBrains has a different way of doing all of this more efficiently, or it's just that many LSPs are not written in the most efficient language.

We're beginning to see more JavaScript tooling written in other languages such as Go and Rust that just blows away existing tooling in terms of performance. Look up esbuild and swc.

Last I heard, JetBrains is also looking into adopting LSP, so maybe we could even pay for a JetBrains LSP to use with VSCode some day.

I'd run some benchmarks first on your codebase. I'd be surprised if union types were a top bottleneck.

How do you do that? My company’s 60kLOC TypeScript codebase takes several minutes to compile and I don’t know how I’m supposed to diagnose what the problem might be; the diagnostics flag exists, but I don’t understand how to take action on its output. The current plan is to break the project into a lot of smaller project references, but yeah, compile speed has probably taxed my company significantly in productivity and I don’t feel confident about addressing it.

Your response tells me you definitely have the talent to solve it. It will require some brute work, but I think you’ll come out of it a lot stronger (and help your team a lot). `tsc --extendedDiagnostics` is a great place to start, as is `tsc --traceResolution > resolution.txt`; both are from the OP. I’d get a book on advanced TypeScript pronto as well. Create a tiny TS project from scratch and use that as your comparable. Measure your progress and have a beer or ice cream every time you make a jump. I worked on one huge TypeScript project years ago at MSFT with slow compile speeds, and was always too scared to try my hand at fixing that, and sort of regret it (though it was a hard problem since we were on Windows ;) )

Note how some of these are directly at odds with writing easily maintainable code. For example, using type annotations for return types [0]:

  - import { otherFunc } from "other";
  + import { otherFunc, otherType } from "other";
  - export function func() {
  + export function func(): otherType {
        return otherFunc();
    }
Not manually annotating the return type here reduces both the work you need to do when refactoring and the visual overload when working with the code. In my opinion, both of those are far more important than small changes in compile time.

[0] https://github.com/microsoft/TypeScript/wiki/Performance#usi...

I honestly find it so annoying when return types aren't annotated. It's considerably less overhead for me when I can look at the signature and see the return type, even if a few characters are added.

Agreed. I think Rust had the right approach here: require type annotations for function parameters & return values, but perform type inference within functions. That makes it obvious when a function's public interface (so to speak) has changed.
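That discipline transposed to TypeScript looks something like this (a sketch; `mean` is a hypothetical example function):

```typescript
// Annotate the public signature (parameters and return type), and let
// inference handle everything inside the body.
function mean(values: number[]): number {
  const sum = values.reduce((a, b) => a + b, 0); // inferred as number
  return sum / values.length;
}

console.log(mean([1, 2, 3])); // 2
```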

Not really: rust is unusable for interactive programming because of this reason. I understand that's not Rust's target domain, but still, there are downsides to their approach.

You mean like a REPL? Most languages are unusable for interactive programming. And it's not a downside if it stops use in a domain that was nobody's goal.

Yes, just pointing out the trade-offs here. By the way, many functional languages have quite good interactive experiences (OCaml, F#, Haskell, and Clojure all have decent REPLs). Usually they work instantly for small-sized projects.

Have you ever seen a procedural language with a good REPL? Python maybe, but definitely no compiled procedural garbage-collectorless language. The "needing to put type signatures" is completely unrelated. I have no idea how that would stop Rust from being good in a REPL.

Sure: https://cdn.rawgit.com/root-project/cling/master/www/index.h...

Can’t speak to its quality, but there’s nothing stopping someone from writing a repl from a sufficient compiler API... and “good” is only limited by inference quality and runtime performance. IDEs are pretty snappy at showing you autocomplete as you type regardless of whether the language has a garbage collector. And performance - well, that’s what caching would be for, and an optimized compiler design that only needs to recompile changed code...

I would also point out that the `auto` keyword in C++ likely saves folks a lot of typing ;-) I know it and similar inference changed my mind on the whole static vs. dynamic debate...

>rust is unusable for interactive programming because of this reason

I'm not sure I understand, could you elaborate more?

Lots of current and potential issues around interactive programming in Rust are outlined and discussed here: https://github.com/rust-lang/rfcs/issues/655

Exactly, and if I declare that a function returns a string, and then edit its code and add a return statement that returns a number by mistake, the compiler will tell me right away.

If I didn’t declare the return type then the function will silently now infer the return type to be “string | number” (which might or might not break compilation elsewhere).

My experience with F# and recently with the new Haskell language server has been that not having written type annotations is a non-issue, because the language tooling still shows them above the function definition and, better yet, can autogenerate them.

So I think this is more of a tooling issue. Do keep in mind that global type inference is really handy for interactive programming in the REPL and for short scripts.

Type inference

I've not had the chance yet to play with OCaml or Haskell in earnest, but it's on my list.

One thing that I'm fascinated and terrified by is the global type inference.

Doesn't it get really hard to figure out how you're allowed to call things?

Does it make your IDE experience slow? Similar to one of the things mentioned in the OP, I would think that the type hints would be super helpful to the compiler/analyzer.

I would say it works surprisingly well 90% of the time. The main help type hints give to the compiler is that it can generate better error messages for when things do go wrong, apart from that there is no major difference.

As I mentioned earlier it's a powerful feature, and has its uses, especially when you are still trying to figure the types of your program.

For an easier taste of global type inference you can try the Elm language; it's also a good stepping stone to learning Haskell.

Looking at the votes on my comment jump up and down, I was very surprised to find that my opinion is apparently very controversial. I find this very intriguing, and I'd like to hear from your side. From your perspective, doesn't IDE intellisense and the like cover that use case? Or do you prefer to have all of that information visible all the time?

VSCode shows return type on hover, and could easily show it all the time via an inline secondary notation, if desired. no need for more tokens.

The problem is that the type it infers is not necessarily the type you want to be exposed on that export. And then it's quite possible to have a bug in implementation of the function that results in a type that's outright wrong.

Types are a subset of contracts. Contracts are best explicit at the API boundary - which is to say, on exported functions and members of exported classes.

Agreed—there are times you definitely need to specify the type. I have them every week, but still find it the exception rather than the rule, and easier to adjust for those, than adjust for the more common case.

Annotated return types are one of the single most prominent readability benefits of typed syntax. You know with certainty what type will be returned in a single brief glance, without the cognitive overhead of parsing the function body and checking various return statements (at worst, returning a variable obtained from an external source may obscure the type further).

For me this also greatly improves my ability to refactor quickly and confidently (not needing to check additional places/files during refactor for edge cases in expected type).

Yes, any type-related refactor mistakes should in theory be picked up at compile time or by smart IDEs, but having the context in front of you still greatly improves speed when writing.

It's also great for code reviews where grokking context is trickier and IDE goodies are typically not as rich.

> in a single brief glance

this is something VSCode/editors could always show if desired via secondary notation, without requiring annotations.

IDEs are great and all, but their features should be an augmentation of the usefulness of the language, not a requirement.

I've already mentioned code reviews as a good example of contexts where we need to read code outside of an IDE regularly, but even beyond that, a language's readability shouldn't be dependent on using some specific environment to read it.

I have mixed feelings on this.

On the one hand, I'm like you. I don't want to be forced to use a specific tool to work with a programming language.

But I can't quite put my finger on why. Like, what if we just say "language Foo includes a compiler and the working environment is this IDE"? Is that wrong? It kinda feels wrong. But that's what Smalltalk does, right? Maybe the tight integration would actually be better.

The LSP pattern allows for bringing anything you’d find in an ide to other targets (like code review tools, as used by GitHub).

I used to be a language purist, but nowadays the costs of not using an ide or lsp supported tools is just too high. I’d prefer minimal tokens and abundant secondary notations provided by parsers than having to add clunky syntax myself.

> language purist

Maybe I'm in a minority but calling anyone using type annotation "language purists" seems a bit of an extreme classification.

I meant I used to believe that a language should be designed to be used notepad first. Now I believe it should always be usable in notepad as a fallback, but the design of the language should be heavily influenced by what capabilities IDEs/LSPs can bring.

So IDEs first but never violate the rule that it's easy to fix something in vim or notepad in a pinch.

I'm going to have to disagree here.

The return type is part of the contract for what a method does. Changing it is just as dangerous as changing the type on a method parameter.

I agree that using type inference locally is a boon to productivity. I disagree that it's a negative for a method return parameter.

Keeping it fixed prevents a future refactor from breaking the method expectations and contracts down the line.

I would argue the opposite: Omitting the return type makes refactorings significantly riskier, and code less readable in my opinion. Leaving out types should only be done in trivial, small-scope areas of code. When done on an exported, widely-used function, this smells like "write-only" code and is hard to maintain.

I don't understand this at all. Type annotations absolutely make a code base easier to maintain, not the opposite. If you are in the "types are overhead" camp, why use typescript at all?

There's a distinct difference between type annotation and type information. In the case shown above, as long as `otherFunc()` is strictly typed, the manual annotation is simply duplicating information that's already there. At no point did I argue against having types, I'm opposing needless duplication. As perhaps a simplified example, annotating `Foo` in the below example is similarly redundant:

  const foo: Foo = new Foo();

This is oversimplified. Using TypeScript, you should specify all the types required to have your codebase fully qualified, and no more. In the example above, having a fully typed `otherFunc` would directly specify the type on `func`. If you (automatically) refactor `otherFunc`, the type naturally flows and you don't end up with (unnecessary) modifications in other spots. Use type inference when possible, but don't overdo it.
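A sketch of how the type flows in that scenario (this `otherFunc` is a hypothetical stand-in for the imported one):

```typescript
// The source function carries the annotation.
function otherFunc(): { id: number; name: string } {
  return { id: 1, name: "a" };
}

// No annotation here: the return type is inferred from otherFunc, so a
// refactor of otherFunc never requires an edit at this spot.
function func() {
  return otherFunc();
}

console.log(func().name); // "a"
```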

Typescript is perfectly capable of telling you what the return type is without needing you to manually specify it.

It tells you what the function's return type is, which isn't necessarily the same as the return type you might have manually written or intended. For example, `undefined` sneaking into the type because of branching code.

I think this can go both ways. There are lint rules that encourage explicit return types for exported functions and public functions, because return types are the interface between modules. As your codebase grows, having clear relationships and contracts between modules becomes more important. This goes doubly for modules distributed as NPM packages - you want the compiler to tell you when changing the body of an exported function constitutes a breaking change to your module’s API - and the easiest way to make that mistake is to change an inferred return type, and conversely the lowest-hanging fruit to prevent that error is to lint for explicit return types.
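A sketch of such a lint rule, assuming the community typescript-eslint plugin (the rule name is real; the surrounding config is illustrative):

```json
{
  "plugins": ["@typescript-eslint"],
  "rules": {
    "@typescript-eslint/explicit-module-boundary-types": "error"
  }
}
```

This flags exported functions that rely on an inferred return type while leaving inference alone inside module-private code.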

I agree. You do need to check the inferred return type (sometimes it will infer a union or new interface you didn't intend), but it helps with refactoring and encapsulation to not add those annotations.

If performance is enough of a problem to warrant a post like this, I'm a little surprised that tsc is itself still written in TypeScript. I get the benefits of that, but it doesn't seem super uncommon that the project gets pushed to its limits these days.

It feels like some of these suggestions reduce the readability and maintainability of the TypeScript, as well as railroading you into patterns that perhaps don't suit your architecture. Manually annotating return types of functions rather than letting them be inferred, and preferring interfaces to intersections, for example.

I think I'll largely continue to write my TypeScript in the way that seems best to me using the syntax provided to me, and let MS hopefully optimise these issues under the hood over time.

We've encountered huge performance problems using TypeScript with the styled-components library. Something about how that library is typed results in multi-second delays before the VSCode intellisense updates on any file in our project. It's absolutely agonising.

Yup! Among other type definition problems. The types are very poorly maintained.

I’m planning to give Emotion a try as it has the same ‘styled’ api.

Yeah, I had the same problem, kind of made me switch to Tailwind, actually. The places where I used dynamic rules based on props were replaced with dynamic classes.

For people who have not used TypeScript:

I've never had issues with the TypeScript compiler being slow, and I write my types with as much complexity as the underlying data/situation demands.

I have several projects that consume dozens of REST APIs, and I have types for all of their requests and responses, and my compilation time is still fine.

I’ll match your anecdote with my own. I’ve worked with several large codebases in TypeScript and slow compilation is a common problem I’ve dealt with. To the point where some people I’ve worked with specifically chose to work on desktop PCs for the extra CPU.

With that said, compilation performance is something most compiled languages deal with so I wouldn’t treat it as some kind of flaw specific only to TypeScript

The only problem I ever had was some weirdly deep nested autogenerated GraphQL types. Otherwise it just works.

In chatting with users, I've heard the generated types vary in quality by a lot depending on the tool you pick. Supposedly this one is pretty good: https://graphql-code-generator.com/docs/plugins/typescript

Interesting: they mention an example of adding a return type if you've "identified a slow part of your code". How would you identify that the compiler is struggling in a particular area?

Glad you asked! We're trying to find good ways to help people investigate this. One of my colleagues wrote up on his work on TypeScript 4.1's `--generateTrace` flag: https://github.com/microsoft/TypeScript/wiki/Performance-Tra...

That page can give some guidance on diagnosing what the compiler's time is going into. Most users won't need to do this, but there are plenty of bigger teams/organizations that are really invested into TS and do want to keep their build times lean.

> Type inference is very convenient, so there's no need to do this universally - however, it can be a useful thing to try if you've identified a slow section of your code.

Can we actually identify which piece takes longer to compile?

I think in 4.1 you can add --generateTrace with an output path; the resulting trace can be inspected in the browser. There's also --extendedDiagnostics, if I remember correctly, which tells you which part of the compilation process takes how long.
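If I'm reading the tsconfig reference right, both options can also be set in tsconfig.json rather than on the command line (the output path here is illustrative):

```json
{
  "compilerOptions": {
    // TS 4.1+: writes trace.json / types.json into this directory,
    // viewable in a Chromium browser at about://tracing
    "generateTrace": "./tsc-trace",
    // prints per-phase timing (parse, bind, check, emit) to the console
    "extendedDiagnostics": true
  }
}
```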

This should be abstracted away, optimizing for compiler performance feels straight out of the 80s.

> optimizing for compiler performance feels straight out of the 80s

I think you're misunderstanding this situation based on the suggestive word, "compiler".

TypeScript's type system is Turing-complete. That means you can theoretically write programs with the type system that cause the compiler to never complete.

Instead of thinking about this as optimizing a compiler, think of it as optimizing runtime performance of a program by better understanding the language that the program is written in.
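A small taste of what "the type system executes" means in practice: type-level addition implemented with tuple lengths (a well-known trick, not anything from the wiki). Nothing below exists at runtime; the checker does all the work, which is exactly why pathological types can make compilation arbitrarily slow.

```typescript
// Build a tuple of length N by recursing until T["length"] matches N.
type BuildTuple<N extends number, T extends unknown[] = []> =
  T["length"] extends N ? T : BuildTuple<N, [...T, unknown]>;

// Concatenate two such tuples; the length of the result is A + B.
type Add<A extends number, B extends number> =
  [...BuildTuple<A>, ...BuildTuple<B>]["length"];

const five: Add<2, 3> = 5; // the compiler computes 2 + 3 while type checking
console.log(five);
```

Feed `BuildTuple` a large enough `N` (or a type with no terminating case) and the checker itself becomes the slow program you're optimizing.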

I've run into many more performance issues with editor latency (i.e. tsserver) than with compiling (tsc). Unfortunately, these tend to be harder to diagnose.

The worst turned out to be a crashing bug in tsserver that manifested as autocomplete and error checking feeling sluggish, I think because the language service kept restarting. It made me feel grumpy about TS for a month or two before I finally looked at the server logs (using the instructions on this page). That quickly led me to this bug: https://github.com/microsoft/TypeScript/issues/35036. Even though it took a while to get fixed, I felt immensely better that the problem was real, acknowledged, etc.

Nowadays my biggest performance issue is with the Material-UI typings.

I had high hopes for project references with --incremental and --watch mode being quicker than using external tools, but they still seem to recompile a lot (and so take a long time). Or maybe that's just because I'm a few tsc versions behind?

I had the same experience on Typescript 3.4.5. I hear that project references with incremental/watch are more performant on newer versions. (Notion is stuck on 3.4 because of this issue: https://github.com/microsoft/TypeScript/issues/31445)

Would you care to share a reproduction - or the actual thing - of the problem this behavior creates in your codebase?

I'm sure you've already spent time on this, but it piques my interest that there isn't a good solution to the issue.

Please use the right design idiom for the project unless it's literally make or break on performance, which should be very rare.

Some of these things can be used, yes, especially those that don't change the overall approach to the project, but otherwise stick with 'the right pattern' for the architecture.


There are, however, 20 matches for `Performance`, including the header and not including the URL.

Is your objection that the word performant does not match your arbitrary criteria for a `real word`?

I don't want to put words in your mouth but I can't figure out any other meaning behind your comment.

History tends to look unfavorably on the stance of "_____ is not in the dictionary, so it's not a real word." While it can initially be a popular argument along the lines of "They're outsiders because they use a new word and therefore we should bond over hating them", it tends to very quickly start looking like "Old man shakes fist at cloud".

Time is a performant system for yeeting this kind of elitism.

To be fair, check out GP's username.
